
Billix: the Sysadmin Toolkit for Your Pocket

Authentication with Fedora Directory Server

Remote Desktop Administration with OpenSSH or VNC

Manage Configuration Files with Cfengine

Setting Up a PXE Boot Server

Network Monitoring with Zenoss

2009 | Supplement Issue | www.linuxjournal.com

Since 1994: The Original Magazine of the Linux Community

SPECIAL SYSTEM ADMINISTRATION ISSUE

Cfengine VNC Billix Zenoss Thin Clients MySQL Heartbeat

MAY THE SOURCE BE WITH YOU

5 SPECIAL_ISSUE.TAR.GZ
System Administration: Instant Gratification Edition

Shawn Powers

6 CFENGINE FOR ENTERPRISE CONFIGURATION MANAGEMENT
How to use cfengine to manage configuration files across large numbers of machines.

Scott Lackey

10 SECURED REMOTE DESKTOP/APPLICATION SESSIONS
Different ways to control a Linux system over a network.

Mick Bauer

14 BILLIX: A SYSADMIN'S SWISS ARMY KNIFE
Build a toolbox in your pocket by installing Billix on that spare USB key.

Bill Childers

17 ZENOSS AND THE ART OF NETWORK MONITORING
Stay on top of your network with an enterprise-class monitoring tool.

Jeramiah Bowling

22 THIN CLIENTS BOOTING OVER A WIRELESS BRIDGE
Setting up a thin-client network and some useful operation/administration tools.

Ronan Skehill, Alan Dunne and John Nelson

26 PXE MAGIC: FLEXIBLE NETWORK BOOTING WITH MENUS
What if you never had to carry around an install or rescue CD again? Set up a PXE boot server with menus and put them all on the network.

Kyle Rankin

30 CREATING VPNS WITH IPSEC AND SSL/TLS
The two most common and current techniques for creating VPNs.

Rami Rosen

34 MYSQL 5 STORED PROCEDURES: RELIC OR REVOLUTION?
Do MySQL 5 Stored Procedures produce tiers of joy or sorrow?

Guy Harrison

38 GETTING STARTED WITH HEARTBEAT
Availability in a heartbeat.

Daniel Bartholomew

42 FEDORA DIRECTORY SERVER: THE EVOLUTION OF LINUX AUTHENTICATION
Want an alternative to OpenLDAP?

Jeramiah Bowling

CONTENTS: SPECIAL SYSTEM ADMINISTRATION ISSUE

2 | www.linuxjournal.com

Linux Journal 2009 Lineup

JANUARY: Security

FEBRUARY: Web Development

MARCH: Desktop

APRIL: System Administration

MAY: Cool Projects

JUNE: Readers' Choice Awards

JULY: Mobile Linux

AUGUST: Kernel Capers

SEPTEMBER: Cross-Platform Development

OCTOBER: Hack This

NOVEMBER: Infrastructure

DECEMBER: Embedded


LINUX JOURNAL (ISSN 1075-3583) (USPS 12854) is published monthly by Belltown Media, Inc., 2211 Norfolk, Ste. 514, Houston, TX 77098 USA. Periodicals postage paid at Houston, Texas and at additional mailing offices. Cover price is $5.99 US. Subscription rate is $29.50/year in the United States, $39.50 in Canada and Mexico, $69.50 elsewhere. POSTMASTER: Please send address changes to Linux Journal, PO Box 16476, North Hollywood, CA 91615. Subscriptions start with the next issue. Canada Post: Publications Mail Agreement 41549519. Canada Returns to be sent to Bleuchip International, PO Box 25542, London, ON N6C 6B2.

www.linuxjournal.com

At Your Service

MAGAZINE
PRINT SUBSCRIPTIONS: Renewing your subscription, changing your address, paying your invoice, viewing your account details or other subscription inquiries can instantly be done on-line: www.linuxjournal.com/subs. Alternatively, within the US and Canada, you may call us toll-free at 1-888-66-LINUX (54689), or internationally at +1-818-487-2089. E-mail us at subs@linuxjournal.com or reach us via postal mail at Linux Journal, PO Box 16476, North Hollywood, CA 91615-9911 USA. Please remember to include your complete name and address when contacting us.

DIGITAL SUBSCRIPTIONS: Digital subscriptions of Linux Journal are now available and delivered as PDFs anywhere in the world for one low cost. Visit www.linuxjournal.com/digital for more information, or use the contact information above for any digital magazine customer service inquiries.

LETTERS TO THE EDITOR: We welcome your letters and encourage you to submit them at www.linuxjournal.com/contact or mail them to Linux Journal, 1752 NW Market Street, 200, Seattle, WA 98107 USA. Letters may be edited for space and clarity.

WRITING FOR US: We always are looking for contributed articles, tutorials and real-world stories for the magazine. An author's guide, a list of topics and due dates can be found on-line: www.linuxjournal.com/author.

ADVERTISING: Linux Journal is a great resource for readers and advertisers alike. Request a media kit, view our current editorial calendar and advertising due dates, or learn more about other advertising and marketing opportunities by visiting us on-line: www.linuxjournal.com/advertising. Contact us directly for further information: ads@linuxjournal.com or +1 713-344-1956 ext. 2.

ON-LINE
WEB SITE: Read exclusive on-line-only content on Linux Journal's Web site, www.linuxjournal.com. Also, select articles from the print magazine are available on-line. Magazine subscribers, digital or print, receive full access to issue archives; please contact Customer Service for further information: subs@linuxjournal.com.

FREE e-NEWSLETTERS: Each week, Linux Journal editors will tell you what's hot in the world of Linux. Receive late-breaking news, technical tips and tricks, and links to in-depth stories featured on www.linuxjournal.com. Subscribe for free today: www.linuxjournal.com/enewsletters.

Executive Editor: Jill Franklin, jill@linuxjournal.com
Associate Editor: Shawn Powers, shawn@linuxjournal.com
Associate Editor: Mitch Frazier, mitch@linuxjournal.com
Senior Editor: Doc Searls, doc@linuxjournal.com
Art Director: Garrick Antikajian, garrick@linuxjournal.com
Products Editor: James Gray, newproducts@linuxjournal.com
Editor Emeritus: Don Marti, dmarti@linuxjournal.com
Technical Editor: Michael Baxter, mab@cruzio.com
Senior Columnist: Reuven Lerner, reuven@lerner.co.il
Chef Français: Marcel Gagné, mggagne@salmar.com
Security Editor: Mick Bauer, mick@visi.com
Proofreader: Geri Gale
Publisher: Carlie Fairchild, publisher@linuxjournal.com
General Manager: Rebecca Cassity, rebecca@linuxjournal.com
Sales Manager: Joseph Krack, joseph@linuxjournal.com
Sales and Marketing Coordinator: Tracy Manford, tracy@linuxjournal.com
Circulation Director: Mark Irgang, mark@linuxjournal.com
Webmistress: Katherine Druckman, webmistress@linuxjournal.com
Accountant: Candy Beauchamp, acct@linuxjournal.com

Contributing Editors
David A. Bandel • Ibrahim Haddad • Robert Love • Zack Brown • Dave Phillips • Marco Fioretti • Ludovic Marcotte • Paul Barry • Paul McKenney • Dave Taylor • Dirk Elmendorf

Linux Journal is published by, and is a registered trade name of, Belltown Media, Inc., PO Box 980985, Houston, TX 77098 USA.

Reader Advisory Panel
Brad Abram Baillio • Nick Baronian • Hari Boukis • Caleb S. Cullen • Steve Case • Kalyana Krishna Chadalavada • Keir Davis • Adam M. Dutko • Michael Eager • Nick Faltys • Ken Firestone • Dennis Franklin Frey • Victor Gregorio • Kristian Erik Hermansen • Philip Jacob • Jay Kruizenga • David A. Lane • Steve Marquez • Dave McAllister • Craig Oda • Rob Orsini • Jeffrey D. Parent • Wayne D. Powel • Shawn Powers • Mike Roberts • Draciron Smith • Chris D. Stark • Patrick Swartz

Editorial Advisory Board
Daniel Frye, Director, IBM Linux Technology Center
Jon "maddog" Hall, President, Linux International
Lawrence Lessig, Professor of Law, Stanford University
Ransom Love, Director of Strategic Relationships, Family and Church History Department, Church of Jesus Christ of Latter-day Saints
Sam Ockman
Bruce Perens
Bdale Garbee, Linux CTO, HP
Danese Cooper, Open Source Diva, Intel Corporation

Advertising
E-MAIL: ads@linuxjournal.com
URL: www.linuxjournal.com/advertising
PHONE: +1 713-344-1956 ext. 2

Subscriptions
E-MAIL: subs@linuxjournal.com
URL: www.linuxjournal.com/subscribe
PHONE: +1 818-487-2089
FAX: +1 818-487-4550
TOLL-FREE: 1-888-66-LINUX
MAIL: PO Box 16476, North Hollywood, CA 91615-9911 USA
Please allow 4–6 weeks for processing address changes and orders.

PRINTED IN USA

LINUX is a registered trademark of Linus Torvalds


Special_Issue.tar.gz

SHAWN POWERS

Welcome to the Linux Journal family! The issue you currently hold in your hands, or more precisely, the issue you're reading on your screen, was created specifically for you. It's so frustrating to subscribe to a magazine and then wait a month or two before you ever get to read your first issue. You decided to subscribe, so we decided to give you some instant gratification. Because really, isn't that the best kind?

This entire issue is dedicated to system administration. As a Linux professional or an everyday penguin fan, you're most likely already a sysadmin of some sort. Sure, some of us are in charge of administering thousands of computers, and some of us just keep our single desktop running. With Linux, though, it doesn't matter how big or small your infrastructure might be. If you want to run a MySQL server on your laptop, that's not a problem. In fact, in this issue, we even talk about stored procedures on MySQL 5, so we'll show you how to get the most out of that laptop server.

If you do manage lots of machines, however, there are some tools that can make your job much easier. Cfengine, for instance, can help with configuration management for tens, hundreds or even thousands of remote computers. It sure beats running to every server and configuring them individually. But what if you mess up those configuration files and half your servers go off-line? Don't worry, we'll teach you about Heartbeat. It's one of those high-availability programs that helps you keep going even when some of your servers die. If a computer dies alone in your server room, does anyone hear it scream? Heartbeat does.

Do you have more than one server room? More than one location? Many of us do. Programs like Zenoss can help you monitor your whole organization from one central place. In the following pages, we'll show you all about it. In fact, using a VPN (which we talk about here) and remote desktop sessions (again, covered here), you probably can manage most of your network without even leaving home. If you haven't reconfigured a server from a coffee shop across the country, you haven't lived.

I do understand, however, that lots of sysadmins enjoy going in to work. That's great; we like office coffee too. Keep reading to find many hands-on tools that will make your work day more efficient, more productive and even more fun. Billix, for example, is a USB-booting Linux distribution that has tons of great features ready to use. It can install operating systems, rescue broken computers, and if you want to do something it doesn't do by default, that's okay; the article explains how to include any custom tools you might want.

Whether it's booting thin clients over a wireless bridge or creating custom PXE boot menus, the life of a system administrator is never boring. At Linux Journal, we try to make your job easier month after month with product reviews, tech tips, games (you know, for your lunch break) and in-depth articles like you'll read here. Whether you're a command-line junkie or a GUI guru, Linux has the tools available to make your life easier.

We hope you enjoy this bonus issue of Linux Journal. If you read this whole thing and are still anxious to get your hands on more Linux and open-source-related information, but your first paper issue hasn't arrived yet, don't forget about our Web site. Check us out at www.linuxjournal.com. If you manage to read the thousands and thousands of on-line articles and still don't have your first issue, I suggest checking with the post office. There must be something wrong with your mailbox.

Shawn Powers is the Associate Editor for Linux Journal. He's also the Gadget Guy for LinuxJournal.com, and he has an interesting collection of vintage Garfield coffee mugs. Don't let his silly hairdo fool you; he's a pretty ordinary guy and can be reached via e-mail at shawn@linuxjournal.com. Or, swing by the #linuxjournal IRC channel on Freenode.net.

System Administration: Instant Gratification Edition


Cfengine is known by many system administrators to be an excellent tool to automate manual tasks on UNIX and Linux-based machines. It also is the most comprehensive framework to execute administrative shell scripts across many servers running disparate operating systems. Although cfengine is certainly good for these purposes, it also is widely considered the best open-source tool available for configuration management. Using cfengine, sysadmins with a large installation of, say, 800 machines can have information about their environment quickly that otherwise would take months to gather, as well as the ability to change the environment in an instant. For an initial example, if you have a set of Linux machines that need to have a different /etc/nsswitch.conf, and then have some processes restarted, there's no need to connect to each machine and perform these steps or even to write a script and run it on the machines once they are identified. You simply can tell cfengine that all the Linux machines running Fedora/Debian/CentOS with XGB of RAM or more need to use a particular /etc/nsswitch.conf until a newer one is designated. Cfengine can do all that in a one-line statement.
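To make the idea concrete, here is a hedged sketch of what such a statement could look like, using the copy syntax covered later in this article; the source path is an illustrative placeholder, and the server name follows the hostname used in the later examples:

```
# Hedged sketch only: push one nsswitch.conf to every Linux host.
# The source path /masterfiles/nsswitch.conf is an illustrative
# placeholder, not a path from the article.
copy:

   linux::

      /masterfiles/nsswitch.conf dest=/etc/nsswitch.conf
           server=cfengine.example1.com
```

A class qualifier keyed to distribution and memory (as the paragraph above describes) would narrow this further, but the single copy rule is the heart of it.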

Cfengine's configuration management capabilities can work in several different ways. In this article, I focus on a make-it-so-and-keep-it-so approach. Let's consider a small hosting company configuration with three administrators and two data centers (Figure 1).

Each administrator can use a Subversion/CVS sandbox to hold repositories for each data center. The cfengine client will run on each client machine, either through a cron job or a cfengine execution dæmon, and pull the cfengine configuration files appropriate for each machine from the server. If there is work to be done for that particular machine, it will be carried out and reported to the server. If there are configuration files to copy, the ones active on the client host will be replaced by the copies on the cfengine server. (Cfengine will not replace a file if the copy process is partial or incomplete.)

A cfengine implementation has three major components:

Version control: this usually consists of a versioning system, such as CVS or Subversion.

Cfengine internal components: cfservd, cfagent, cfexecd, cfenvd, cfagent.conf and update.conf.

Cfengine commands: processes, files, shellcommands, groups, editfiles, copy and so forth.

The cfservd is the master dæmon, configured with /etc/cfservd.conf, and it listens on port 5803 for connections to the cfengine server. This dæmon controls security and directory access for all client machines connecting to it. cfagent is the client program for running cfengine on hosts. It will run either from cron, manually or from the execution dæmon for cfengine, cfexecd. A common method for running the cfagent is to execute it from cron using the cfexecd in non-dæmon mode. The primary reason for using both is to engage cfengine's logging system. This is accomplished using the following:

*/10 * * * * /var/cfengine/sbin/cfexecd -F

as a cron entry on Linux (unless Solaris starts to understand */10). Note that this is fairly frequent and good only for a low number of servers. We don't want 800 servers updating within the same ten minutes.

Cfengine for Enterprise Configuration Management
Cfengine makes it easier to manage configuration files across large numbers of machines. SCOTT LACKEY

SYSTEM ADMINISTRATION

Figure 1. How the Few Control the Many

The cfenvd is the "environment dæmon" that runs on the client side of the cfengine implementation. It gathers information about the host machine, such as hostname, OS and IP address. The cfenvd detects these factors about a host and uses them to determine to which groups the machine belongs. This, in effect, creates a profile for each machine that cfengine uses to determine what work to perform on each host.

The master configuration file for each host is cfagent.conf. This file can contain all the configuration information and cfengine code for the host, a subset of hosts or all hosts in the cfengine network. This file is often just a starting point where all configurations are stored in other files and "imported" into cfagent.conf, in a very similar fashion to Nagios configuration files. The update.conf file is the fundamental configuration file for the client. It primarily just identifies the cfengine server and gets a copy of the cfagent.conf.

Figure 2. Automated Distribution of Cfengine Files

The update.conf file tells the cfengine server to deploy a new cfagent.conf file (and perhaps other files as well) if the current copy on the host machine is different. This adds some protection for a scenario where a corrupt cfagent.conf is sent out, or in case there never was one. Although you could use cfengine to distribute update.conf, it should be copied manually to each host.
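As a rough illustration (not verbatim from this article), a minimal update.conf might do little more than refresh the client's input files from the master; the directory and server name below simply mirror the conventions used in the examples later in this article:

```
# Hedged sketch of a minimal update.conf. It only pulls the cfengine
# input files down from the master server; the paths and server name
# mirror the other examples in this article, not a shipped default.
control:

   actionsequence = ( copy )

copy:

   /var/cfengine/inputs dest=/var/cfengine/inputs
        server=cfengine.example1.com
        recurse=inf
        backup=false
```

Keeping this file tiny is the point: even if cfagent.conf is corrupt, the client can still fetch a fixed copy on its next run.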

Cfengine "commands" are not entered on the command line. They make up the syntax of the cfengine configuration language. Because cfengine is a framework, the system administrator must write the necessary commands in cfengine configuration files in order to move and manipulate data. As an example, let's take a look at the files command as it would appear in the cfagent.conf file:

files:

   /etc/passwd mode=644
        owner=root action=fixall

   /etc/shadow mode=600
        owner=root action=fixall

This would set all machines' /etc/passwd and /etc/shadow files to the permissions listed in the file (644 and 600). It also would change the owner of the file to root and fix all of these settings if they are found to be different, each time cfengine runs. It's important to keep in mind that there are no group limitations to this particular files command. If cfengine does not have a group listed for the command, it assumes you mean any host. This also could be written as:

files:

   any::

      /etc/passwd mode=644
           owner=root action=fixall

      /etc/shadow mode=600
           owner=root action=fixall

This brings us to an important topic in building a cfengine implementation: groups. There is a groups command that can be used to assign hosts to groups based on various criteria. Custom groups that are created in this way are called soft groups. The groups that are filled by the cfenvd dæmon automatically are referred to as hard groups. To use the groups feature of cfengine and assign some soft groups, simply create a groups.cf file, and tell the cfagent.conf to import it somewhere in the beginning of the file:

import:

   any::

      groups.cf

Cfengine will look in the default directory for the groups.cf file in /var/cfengine/inputs. Now you can create arbitrary groups based on any criteria. It is important to remember that the terms groups and classes are completely interchangeable in cfengine:

groups:

   development = ( nfs01 nfs02 10.0.0.17 )

   production = ( app01 app02 development )

You also can combine hard groups that have been discovered by cfenvd with soft groups:

groups

legacy = ( irix compiled_on_cygwin sco )

Let's get our testing setup in order. First, install cfengine on a server and a client or workstation. Cfengine has been compiled on almost everything, so there should be a package for your OS/distribution. Because the source is usually the latest version, and many versions are bug fixes, I recommend compiling it yourself. Installing cfengine gives you both the server and client binaries


and utilities on every machine, so be careful not to run the server dæmon (cfservd) on a client machine unless you specifically intend to do that. After the install, you should have a /var/cfengine directory and the binaries mentioned previously.

Before any host can actually communicate with the cfengine server, keys must be exchanged between the two. Cfengine keys are similar to SSH keys, except they are one-way. That is to say, both the server and the client must have each other's public key in order to communicate. Years of sysadmin paranoia cause me to recommend manually copying all keys and trusting nothing. Copy /var/cfengine/ppkeys/localhost.pub from the server to all the clients, and from the clients to the server, in the same directory, renaming them /var/cfengine/ppkeys/root-10.1.10.1.pub, where the IP is 10.1.10.1.

On the server side, cfservd.conf must be configured to allow clients to access particular directories. To do this, create an AllowConnectionsFrom and an admit section:

# cfservd.conf

control:

   AllowConnectionsFrom = ( 192.168.0.0/24 )

admit:

   /configs/datacenter1 *.example1.com

   /configs/datacenter2 *.example2.com

To test your example client to see whether it is connecting to the cfengine server, make sure port 5803 is clear between them, and run the server with:

cfservd -v -d2

And on the client, run:

cfagent -v --no-splay

This will give you a lot of debugging information on the server side to see what's working and what isn't.

Now, let's take a look at distributing a configuration file. Although cfengine has a full-featured file editor in the editfiles command, using this method for distributing configurations is not advised. The copy command will move a file from the server to the client machine with .cfnew appended to the filename. Then, once the file has been copied completely, it renames the file and saves the old copy as .cfsaved in the specified directory. Here's the copy command syntax:

copy:

   class::

      <<master-file>>

         dest=target-file
         server=server
         mode=mode
         owner=owner
         group=group
         backup=true/false
         repository=backup dir
         recurse=number/inf/0
         define=classlist

Only the dest= is required, along with the filename to save at the destination. These can be different. Here's another example:

copy:

   linux::

      $(copydir)/linux/resolv.conf

         dest=/etc/resolv.conf
         server=cfengine.example1.com
         mode=644
         owner=root
         group=root
         backup=true
         repository=/var/cfengine/cfbackup
         recurse=0
         define=copiedresolvdotconf

The last line in this copy statement assigns this host to a group called copiedresolvdotconf. Although we don't have to do anything after copying this particular file, we may want to do some action on all hosts that just had this file successfully sent to them, such as sending an e-mail or restarting a process. As another example, if you update a configuration file that is attached to a dæmon, you may want to send a SIGHUP to the process to cause it to reread the configuration file. This is common with Apache's httpd.conf or inetd.conf. If the copy is not successful, this server won't be added to the copiedresolvdotconf class. You can query all servers in the network to see whether they are members and, if not, find out what went wrong.
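For instance, a follow-up action keyed to that class might look like the sketch below; the specific restart command is an illustrative assumption, not something the article prescribes:

```
# Hedged sketch: act only on hosts that the successful copy just
# placed in the copiedresolvdotconf class. Restarting nscd is an
# illustrative choice of follow-up action, not from the article.
shellcommands:

   copiedresolvdotconf::

      "/etc/init.d/nscd restart"
```

Because the class is defined only when the copy completes, hosts where the transfer failed simply skip this command.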

A great way to version control your config files is to use a cfengine variable for the filename being copied to control which version gets distributed. Such a line may look something like this:

copy:

   linux::

      $(copydir)/linux/$(resolv_conf)

Or, better yet, you can use cfengine's class-specific variables, whose scope is limited to the class with which they are associated. This makes copy statements much more elegant and can simplify changes as your cfengine files scale:

control:

   # $(resolve_conf) value depends on context


   # is this a linux machine or hpux?

   linux:: resolve_conf = ( $(copydir)/linux/resolv.conf )

   hpux:: resolve_conf = ( $(copydir)/hpux/resolv.conf )

copy:

   linux::

      $(resolve_conf)

Here is a full cfagent.conf file that makes use of everything I've covered thus far. It also adds some practical examples of how to do sysadmin work with cfengine:

# cfagent.conf

control:

   actionsequence = ( files editfiles shellcommands )

   AddInstallable = ( cron_restart )

   solaris:: crontab = ( /var/spool/cron/crontabs/root )

   linux:: crontab = ( /var/spool/cron/root )

files:

   solaris::

      $(crontab)
         action=touch

   linux::

      $(crontab)
         action=touch

editfiles:

   solaris::

      { $(crontab)
        AppendIfNoSuchLine "0,10,20,30,40,50 * * * * /var/cfengine/sbin/cfexecd -F"
        DefineClasses "cron_restart"
      }

   linux::

      { $(crontab)
        AppendIfNoSuchLine "0,10,20,30,40,50 * * * * /var/cfengine/sbin/cfexecd -F"
      }

      # linux doesn't need a cron restart

shellcommands:

   solaris.cron_restart::

      "/etc/init.d/cron stop"
      "/etc/init.d/cron start"

import:

   any::

      groups.cf
      copy.cf

The above is a full cfagent configuration that adds cfengine execution from cron to each client (if it's Linux or Solaris). So, effectively, once you run cfengine manually for the first time with this cfagent.conf file, cfengine will continue to run every ten minutes from that host, but you won't need to edit or restart cron. The control section of the cfagent.conf is where you can define some variables that will control how cfengine handles the configuration file. actionsequence tells cfengine what order to execute each command, and AddInstallable is a variable that holds soft groups that get defined later in the file in a "define" statement, such as after the editfiles command, where the line is DefineClasses "cron_restart". The reason for using AddInstallable is that sometimes cfengine skips over groups that are defined after command execution, and defining that group in the control section ensures that the command will be recognized throughout the configuration.

Being able to check configuration files out from a versioning system and distribute them to a set of servers is a powerful system administration tool. A number of independent tools will do a subset of cfengine's work (such as rsync, ssh and make), but nothing else allows a small group of system administrators to manage such a large group of servers. Centralizing configuration management has the dual benefit of information and control, and cfengine provides these benefits in a free, open-source tool for your infrastructure and application environments.

Scott Lackey is an independent technology consultant who has developed and deployed configuration management solutions across industry, from NASA to Wall Street. Contact him at slackey@violetconsulting.net, www.violetconsulting.net.


http://www.linuxjournal.com/rss_feeds

Linux News and Headlines Delivered To You

Linux Journal topical RSS feeds NOW AVAILABLE


There are many different ways to control a Linux system over a network, and many reasons you might want to do so. When covering remote control in past columns, I've tended to focus on server-oriented usage scenarios and tools, which is to say, on administering servers via text-based applications, such as OpenSSH. But what about GUI-based applications and remote desktops?

Remote desktop sessions can be very useful for technical support, surveillance and remote control of desktop applications. But it isn't always necessary to access an entire desktop; you may need to run only one or two specific graphical applications.

In this month's column, I describe how to use VNC (Virtual Network Computing) for remote desktop sessions and OpenSSH with X forwarding for remote access to specific applications. Our focus here, of course, is on using these tools securely, and I include a healthy dose of opinion as to the relative merits of each.

Remote Desktops vs. Remote Applications

So, which approach should you use, remote desktops or remote applications? If you've come to Linux from the Microsoft world, you may be tempted to assume that because Terminal Services in Windows is so useful, you have to have some sort of remote desktop access in Linux, too. But that may not be the case.

Linux and most other UNIX-based operating systems use the X Window System as the basis for their various graphical environments. And, the X Window System was designed to be run over networks. In fact, it treats your local system as a self-contained network, over which different parts of the X Window System communicate.

Accordingly, it's not only possible but easy to run individual X Window System applications over TCP/IP networks, that is, to display the output (window) of a remotely executed graphical application on your local system. Because the X Window System's use of networks isn't terribly secure (the X Window System has no native support whatsoever for any kind of encryption), nowadays we usually tunnel X Window System application windows over the Secure Shell (SSH), especially OpenSSH.

The advantage of tunneling individual application windows is that it's faster and generally more secure than running the entire desktop remotely. The disadvantages are that OpenSSH has a history of security vulnerabilities, and for many Linux newcomers, forwarding graphical applications via commands entered in a shell session is counterintuitive. And besides, as I mentioned earlier, remote desktop control (or even just viewing) can be very useful for technical support and for security surveillance.

Using OpenSSH with X Window System Forwarding

Having said all that, tunneling X Window System applications over OpenSSH may be a lot easier than you imagine. All you need is a client system running an X server (for example, a Linux desktop system or a Windows system running the Cygwin X server) and a destination system running the OpenSSH dæmon (sshd).

Note that I didn't say "a destination system running sshd and an X server". This is because X servers, oddly enough, don't actually run or control X Window System applications; they merely display their output. Therefore, if you're running an X Window System application on a remote system, you need to run an X server on your local system, not on the remote system. The application will execute on the remote system and send its output to your local X server's display.

Suppose you've got two systems, mylaptop and remotebox, and you want to monitor system resources on remotebox with the GNOME System Monitor. Suppose further, you're running the X Window System on mylaptop and sshd on remotebox.

First, from a terminal window or xterm on mylaptop, you'd open an SSH session like this:

mick@mylaptop:~$ ssh -X admin-slave@remotebox

admin-slave@remotebox's password:

Last login: Wed Jun 11 21:50:19 2008 from dtcla00b674986d

admin-slave@remotebox:~$

Note the -X flag in my ssh command. This enables X Window System forwarding for the SSH session. In order for that to work, sshd on the remote system must be configured with X11Forwarding set to yes in its /etc/ssh/sshd_config file. On many distributions, yes is the default setting, but check yours to be sure.
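A quick, hedged way to check is a grep against the dæmon's config; the throwaway sample file below stands in for a real /etc/ssh/sshd_config so the commands are self-contained:

```shell
# Check whether X11Forwarding is enabled. A sample file stands in
# for /etc/ssh/sshd_config here so the snippet is self-contained;
# on a real system, point grep at the actual file instead.
cat > /tmp/sshd_config.sample <<'EOF'
Port 22
X11Forwarding yes
EOF
grep -Ei '^[[:space:]]*X11Forwarding' /tmp/sshd_config.sample
```

If the grep prints nothing (or prints X11Forwarding no), the server will refuse X forwarding requests from clients, and remember to reload sshd after changing the setting.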

Next, to run the GNOME System Monitor on remotebox, such that its output (window) is displayed on mylaptop, simply execute it from within the same SSH session:

admin-slave@remotebox:~$ gnome-system-monitor &

The trailing ampersand (&) causes this command to run in the background, so you can initiate other commands from the same shell session. Without this, the cursor won't reappear in your shell window until you kill the command you just started.

Secured Remote Desktop/Application Sessions
Run graphical applications from afar, securely. MICK BAUER

At this point, the GNOME System Monitor window should appear on mylaptop's screen, displaying system performance information for remotebox. And, that really is all there is to it.

This technique works for practically any X Window System application installed on the remote system. The only catch is that you need to know the name of anything you want to run in this way, that is, the actual name of the executable file.

If you're accustomed to starting your X Window System applications from a menu on your desktop, you may not know the names of their corresponding executables. One quick way to find out is to open your desktop manager's menu editor, and then view the properties screen for the application in question.

For example, on a GNOME desktop, you would right-click on the Applications menu button, select Edit Menus, scroll down to System→Administration, right-click on System Monitor and select Properties. This pops up a window whose Command field shows the name gnome-system-monitor.

Figure 1 shows the Launcher Properties, not for the GNOME System Monitor, but instead for the GNOME File Browser, which is a better example because its launcher entry includes some command-line options. Obviously, all such options also can be used when starting X applications over SSH.

Figure 1. Launcher Properties for the GNOME File Browser (Nautilus)

If this sounds like too much trouble, or if you're worried about accidentally messing up your desktop menus, you simply can run the application in question, issue the command ps auxw in a terminal window, and find the entry that corresponds to your application. The last field in each row of the output from ps is the executable's name, plus the command-line options with which it was invoked.
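As a concrete illustration of the ps trick, the following sketch uses a throwaway sleep process as a stand-in for a graphical application (the lookup pattern is the same for any executable):

```shell
# Start a stand-in process (with a real X app, you'd launch it from its menu first).
sleep 300 &

# Find its row in the process listing; the [s] bracket trick keeps
# grep from matching its own command line. The last field(s) of the
# matching row are the executable's name plus its invocation options.
ps auxw | grep '[s]leep 300'

kill $!   # clean up the stand-in process
```

Greping for part of the application's window title or menu name is usually enough to narrow the listing down to one row.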

Once you've finished running your remote X Window System application, you can close it the usual way (selecting Exit from its File menu, clicking the x button in the upper right-hand corner of its window, and so forth). Don't forget to close your SSH session, too, by issuing the command exit in the terminal window where you're running SSH.

Virtual Network Computing (VNC)
Now that I've shown you the preferred way to run remote X Window System applications, let's discuss how to control an entire remote desktop. In the Linux/UNIX world, the most popular tool for this is Virtual Network Computing, or VNC.

Originally a project of the Olivetti Research Laboratory (which was subsequently acquired by Oracle and then AT&T before being shut down), VNC uses a protocol called Remote Frame Buffer (RFB). The original creators of VNC now maintain the application suite RealVNC, which is available in free and commercial versions, but TightVNC, UltraVNC and GNOME's vino VNC server and vinagre VNC client also are popular.

VNC's strengths are its simplicity, ubiquity and portability: it runs on many different operating systems. Because it runs over a single TCP port (usually TCP 5900), it's also firewall-friendly and easy to tunnel.
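One detail worth knowing before you start tunneling (it is a convention of the RFB protocol rather than something stated above): VNC servers conventionally listen on TCP port 5900 plus the X-style display number, so display :1 is port 5901, :2 is 5902, and so on. A quick shell-arithmetic check:

```shell
# VNC display-to-port convention: TCP port = 5900 + display number.
for display in 0 1 2; do
    echo "display :$display -> TCP port $((5900 + display))"
done
```

Knowing this mapping matters when you forward ports by hand, as we do later with SSH.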

Its security, however, is somewhat problematic. VNC authentication uses a DES-based transaction that, if eavesdropped on, is vulnerable to optimized brute-force (password-guessing) attacks. This vulnerability is exacerbated by the fact that many versions of VNC support only eight-character passwords.

Furthermore, VNC session data usually is transmitted unencrypted. Only a couple flavors of VNC support TLS encryption of RFB streams, and it isn't clear how securely TLS has been implemented even in those. Thus, an attacker using a trivially hacked VNC client may be able to reconstruct and view eavesdropped VNC streams.


But Don't Real Sysadmins Stick to Terminal Sessions?
If you've read my book or my past columns, you've endured my repeated exhortations to keep the X Window System off of Internet-facing servers, or any other system on which it isn't needed, due to X's complexity and history of security vulnerabilities. So why am I devoting an entire column to graphical remote system administration?

Don't worry, I haven't gone soft-hearted (though possibly slightly soft-headed). I stand by that advice. But there are plenty of contexts in which you may need to administer or view things remotely, besides hardened servers in Internet-facing DMZ networks.

And, not all people who need to run remote applications in those non-Internet-DMZ scenarios are advanced Linux users. Should they be forbidden from doing what they need to do until they've mastered using the vi editor and writing bash scripts? Especially given that it is possible to mitigate some of the risks associated with the X Window System and VNC?

Of course they shouldn't, although I do encourage all Linux newcomers to embrace the command line. The day may come when Linux is a truly graphically oriented operating system like Mac OS, but for now, pretty much the entire OS is driven by configuration files in /etc (and in users' home directories), and that's unlikely to change soon.

Luckily, as it operates over a single TCP port, VNC is easy to tunnel through SSH, through Virtual Private Network (VPN) sessions and through TLS/SSL wrappers, such as stunnel. Let's look at a simple VNC-over-SSH example.

Tunneling VNC over SSH
To tunnel VNC over SSH, your remote system must be running an SSH daemon (sshd) and a VNC server application. Your local system must have an SSH client (ssh) and a VNC client application.

Our example remote system, remotebox, already is running sshd. Suppose it also has vino, which is also known as the GNOME Remote Desktop Preferences applet (on Ubuntu systems, it's located in the System menu's Preferences section). First, presumably from remotebox's local console, you need to open that applet and enable remote desktop sharing. Figure 2 shows vino's General settings.

If you want to view only this system's remote desktop without controlling it, uncheck Allow other users to control your desktop. If you don't want to have to confirm remote connections explicitly (for example, because you want to connect to this system when it's unattended), you can uncheck the Ask you for confirmation box. Any time you leave yourself logged in to an unattended system, be sure to use a password-protected screensaver.

vino is limited in this way: because vino is loaded only after you log in, you can use it only to connect to a fully logged-in GNOME session in progress, not, for example, to a gdm (GNOME login) prompt. Unlike vino, other versions of VNC can be invoked as needed from xinetd or inetd. That technique is out of the scope of this article, but see Resources for a link to a how-to for doing so in Ubuntu, or simply Google the string "vnc xinetd".

Continuing with our vino example, don't close that Remote Desktop Preferences applet yet. Be sure to check the Require the user to enter this password box, and to select a difficult-to-guess password. Remember, vino runs in an already-logged-in desktop session, so unless you set a password here, you'll run the risk of allowing completely unauthenticated sessions (depending on whether a password-protected screensaver is running).

If your remote system will be run logged in but unattended, you probably will want to uncheck Ask you for confirmation. Again, don't forget that locked screensaver.

We're not done setting up vino on remotebox yet. Figure 3 shows the Advanced Settings tab.

Several advanced settings are particularly noteworthy. First, you should check Only allow local connections, after which remote connections still will be possible, but only via a port-forwarder like SSH or stunnel. Second, you may want to set a custom TCP port for vino to listen on, via the Use an alternative port option. In this example, I've chosen 20226. This is an instance of useful security through obscurity: if our other (more meaningful) security controls fail, at least we won't be running VNC on the obvious default port.

Figure 2. General Settings in GNOME Remote Desktop Preferences (vino)

Figure 3. Advanced Settings in GNOME Remote Desktop Preferences (vino)

Also, you should check the box next to Lock screen on disconnect, but you probably should not check Require encryption, as SSH will provide our session encryption, and adding redundant (and not-necessarily-known-good) encryption will only increase vino's already-significant latency. If you think there's only a moderate level of risk of eavesdropping in the environment in which you want to use vino (for example, on a small, private local-area network inaccessible from the Internet), vino's TLS implementation may be good enough for you. In that case, you may opt to check the Require encryption box and skip the SSH tunneling.

(If any of you know more about TLS in vino than I was able to divine from the Internet, please write in. During my research for this article, I found no documentation or on-line discussions of vino's TLS design details whatsoever, beyond people commenting that the soundness of TLS in vino is unknown.)

This month, I seem to be offering you more "opt-out" opportunities in my examples than usual. Perhaps I'm becoming less dogmatic with age. Regardless, let's assume you've followed my advice to forego vino's encryption in favor of SSH.

At this point, you're done with the server-side settings. You won't have to change those again. If you restart your GNOME session on remotebox, vino will restart as well, with the options you just set. The following steps, however, are necessary each time you want to initiate a VNC/SSH session.

On mylaptop (the system from which you want to connect to remotebox), open a terminal window, and type this command:

mick@mylaptop:~$ ssh -L 20226:remotebox:20226 admin-slave@remotebox

OpenSSH's -L option sets up a local port-forwarder. In this example, connections to mylaptop's TCP port 20226 will be forwarded to the same port on remotebox. The syntax for this option is "-L [localport]:[remote IP or hostname]:[remoteport]". You can use any available local TCP port you like (higher than 1024, unless you're running SSH as root), but the remote port must correspond to the alternative port you set vino to listen on (20226 in our example), or if you didn't set an alternative port, it should be VNC's default of 5900.
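To make the three fields of the -L syntax easier to see, here is the example command assembled from its parts (the hostnames and ports are the ones used above):

```shell
# -L [localport]:[remote IP or hostname]:[remoteport]
localport=20226          # any free local TCP port (>1024 for non-root)
remotehost=remotebox     # the forwarding target, as seen from the SSH server
remoteport=20226         # vino's alternative port from the Advanced Settings tab
echo "ssh -L ${localport}:${remotehost}:${remoteport} admin-slave@${remotehost}"
# prints: ssh -L 20226:remotebox:20226 admin-slave@remotebox
```

Here the echo just prints the command for inspection; running the printed command itself is what opens the tunnel.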

That's it for the SSH session. You'll be prompted for a password and then given a bash prompt on remotebox. But you won't need it, except to enter exit when your VNC session is finished. It's time to fire up your VNC client.

vino's companion client, vinagre (also known as the GNOME Remote Desktop Viewer), is good enough for our purposes here. On Ubuntu systems, it's located in the Applications menu in the Internet section. In Figure 4, I've opened the Remote Desktop Viewer and clicked the Connect button. As you can see, rather than remotebox, I've entered localhost as the hostname. I've also entered port number 20226.

When I click the Connect button, vinagre connects to mylaptop's local TCP port 20226, which actually is being listened on by my local SSH process. SSH forwards this connection attempt through its encrypted connection to TCP 20226 on remotebox, which is being listened on by remotebox's vino process.

At this point, I'm prompted for remotebox's vino password (shown in Figure 2). On successful authentication, I'll have full access to my active desktop session on remotebox.

To end the session, I close the Remote Desktop Viewer's session, and then enter exit in my SSH session to remotebox; that's all there is to it.

This technique is easy to adapt to other versions of VNC servers and clients, and probably also to other versions of SSH; there are commercial alternatives to OpenSSH, including Windows versions. I mentioned that VNC client applications are available for a wide variety of platforms; on any such platform, you can use this technique, so long as you also have an SSH client that supports port forwarding.

Conclusion
Thus ends my crash course on how to control graphical applications over networks. I hope my examples of both techniques, SSH with X forwarding and VNC over SSH, help you get started with whatever particular distribution and software packages you prefer. Be safe.

Mick Bauer (darth.elmo@wiremonkeys.org) is Network Security Architect for one of the US's largest banks. He is the author of the O'Reilly book Linux Server Security, 2nd edition (formerly called Building Secure Servers With Linux), an occasional presenter at information security conferences and composer of the "Network Engineering Polka".


Resources

Cygwin/X (information about Cygwin's free X server for Microsoft Windows): x.cygwin.com

Tichondrius' HOWTO for setting up VNC with resumable sessions (Ubuntu-centric, but mostly applicable to other distributions): ubuntuforums.org/showthread.php?t=122402

Wikipedia's VNC article, which may be helpful in making sense of the different flavors of VNC: en.wikipedia.org/wiki/Vnc

Figure 4. Using vinagre to Connect to an SSH-Forwarded Local Port


Does anyone remember Linuxcare? Founded in 1998, Linuxcare was a company that provided support services for Linux users in corporate environments. I remember seeing Linuxcare at the first-ever LinuxWorld conference in San Jose, and the thing I took away from that LinuxWorld was the Linuxcare Bootable Business Card (BBC). The BBC was a 50MB cut-down Linux distribution that fit on a business-card-size compact disc. I used that distribution to recover and repair quite a few machines, until the advent of Knoppix. I always loved the portability of that little CD though, and I missed it greatly until I stumbled across Damn Small Linux (DSL) one day.

After reading through the DSL Web site, I discovered that it was possible to run DSL off of a bootable USB key, and that old love for the Bootable Business Card was rekindled in a new way. It wasn't until I had a conversation with fellow sysadmin Kyle Rankin about the PXE boot environment he'd implemented that I realized it might be possible to set up a USB key to do more than merely boot a recovery environment. Before long, I had added the CentOS and Ubuntu netinstalls to my little USB key. Not long after that, I was mentioning this in my favorite IRC channel, and one of the fellows in there suggested I put the code on SourceForge and call it Billix. I'd had a couple beers by then and thought it sounded like a great idea. In that instant, Billix was born.

Billix is an aggregation of many different tools that can be useful to system administrators, all compressed down to fit within a 256MB bootable USB thumbdrive. The 256MB size is not an arbitrary number; rather, it was chosen because USB thumbdrives are very inexpensive at that size (many companies now give them away as advertising gimmicks). This allows me to have many Billix keys lying around, just waiting to be used. Because the keys are cheap or free, I don't feel bad about leaving one in a server for a day or two. If your USB drive is larger than 256MB, you still can use it for its designed purpose of storing files; Billix doesn't hamper normal use of the USB drive in any way. There also is an ISO distribution of Billix, if you want to burn a CD of it, but I feel it's not nearly as convenient as having it on a USB key.

The current Billix distribution (0.21 at the time of this writing) includes the following tools:

Damn Small Linux 4.2.5

Ubuntu 8.04 LTS netinstall

Ubuntu 7.10 netinstall

Ubuntu 6.06 LTS netinstall

Fedora 8 netinstall

CentOS 5.1 netinstall

CentOS 4.6 netinstall

Debian Etch netinstall

Debian Sarge netinstall

Memtest86 memory-checking utility

Ntpwd Windows password changing utility

DBAN disk wiper utility

So, with one USB key, a system administrator can recover or repair a machine, install one of eight different Linux distributions, test the memory in a system, get into a Windows machine with a lost password, or wipe the disks of a machine before repurpose or disposal. In order to install any of the netinstall-based Linux distributions, a working Internet connection with DHCP is required, as the netinstall downloads the installation bits for each distribution on the fly from Internet-based mirrors.

Hopefully, you're excited to check out Billix. You simply can download the ISO version and burn it to a CD to get started, but the full utility of Billix really shines when you install it on a USB disk. Before you install it on a USB disk, you need to meet the following prerequisites:


256MB or greater USB drive, with a FAT- or FAT32-based filesystem

Internet connection with DHCP (for netinstalls only; not required for DSL, Windows password removal or disk wiping with DBAN)

install-mbr (part of the mbr package on Ubuntu or Debian, needed for some USB drives)

syslinux (from the syslinux package on Ubuntu or Debian, required to create the bootsector on the USB drive)

Your system must be capable of booting from USB devices (most have this ability if they're made after 2005)

To install the USB-based version of Billix, first check your drive. If that drive has the U3 Windows software on it, you may want to remove it to unlock all of the drive's capacity (see the Resources section for U3 removal utilities, which are typically Windows-based). Next, if your USB drive has data on it, back up the data. I cannot stress this enough. You will be making adjustments to the partition table of the USB drive, so backing up any data that already is on the key is critical. Download the latest version of Billix from the Sourceforge.net project page to your computer. Once the download is complete, untar the contents of the tarball to the root directory of your USB drive.

Now that the contents of the tarball are on your USB drive, you need to install a Master Boot Record (MBR) on the drive and set a bootsector on the drive. The Master Boot Record needs to be set up on the USB drive first. Issue an install-mbr -p1 <device> command (where <device> is your USB drive, such as /dev/sdb). Warning: make sure that you get the device of the USB drive correct, or you run the risk of messing up the MBR on your system's boot device. The -p1 option tells install-mbr to set the first partition as active (that's the one that will contain the bootsector).

Next, the bootsector needs to be installed within the first partition. Run syslinux -s <device/partition> (where <device/partition> is the device and partition of the USB drive, such as /dev/sdb1). Warning: much like installing the MBR, installing the bootsector can be a dangerous operation if you run it on the wrong device, so take care and double-check your command line before pressing the Enter key.

Figure 1. The Billix Boot Menu

At this point, your USB drive can be unmounted safely, and you can test it out by booting from the USB drive. Once your system successfully boots from the USB drive, you should see a menu similar to the one shown in Figure 1. Simply choose the number for what you want to boot, run or install, and that distribution will spring into action. If you don't select a number, Damn Small Linux will boot automatically after 30 seconds.

Damn Small Linux is a miniature version of Knoppix (it actually has much of the automatic hardware-detection routines of Knoppix in it). As such, it makes an excellent rescue environment, or it can be used as a quick "trusted desktop" in the event you need to "borrow" a friend's computer to do something. I have used DSL in the past to commandeer a system temporarily at a cybercafé, so I could log in to work and fix a sick server. I've even used DSL to boot and mount a corrupted Windows filesystem, and I was able to save some of the data. DSL is fairly full-featured for its size, and it comes with two window managers (JWM or Fluxbox). It can be configured to save its data back to the USB disk in a persistent fashion, so you always can be sure you have your critical files with you and that they're easily accessible.

All the Linux distribution installations have one thing in common: they are all network-based installs. Although this is a good thing for Billix, as they take up very little space (around 10MB for each distro), it can be a bad thing during installation, as the installation time will vary with the speed of your Internet connection. There is one other upside to a network-based installation: in many cases, there is no need to update the newly installed operating system after installation, because the OS bits that are downloaded are typically up to date. Note that when using the Red Hat-based installers (CentOS 4.6, CentOS 5.1 and Fedora 8), the system may appear to hang during the download of a file called minstg2.img. The system probably isn't hanging; it's just downloading that file, which is fairly large (around 40MB), so it can take a while depending on the speed of the mirror and the speed of your connection. Take care not to specify the USB disk accidentally as the install target for the distribution you are attempting to install.

Troubleshooting Billix

A few things can go wrong when converting a USB key to run Billix (or any USB-based distribution). The most common issue is for the USB drive to fail to boot the system. This can be due to several things. Older systems often split USB disk support into USB-Floppy emulation and USB-HDD emulation. For Billix to work on these systems, USB-HDD needs to be enabled. If your drive came with the U3 Windows-based software vault, this typically needs to be disabled or removed prior to installing Billix.

If you're seeing "MBR123" or something similar in the upper-left corner, but the system is hanging, you have a misconfigured MBR. Try install-mbr again, and make sure to use the -p1 switch. You will need to run syslinux again after running install-mbr. If all else fails, you probably need to wipe the USB drive and begin again. Back up the data on the USB drive, then use fdisk to build a new partition table (make sure to set it as FAT or FAT32). Use mkfs.vfat (with the -F 32 switch if it's a FAT32 filesystem) to build a new blank filesystem, untar the tarball again, and run install-mbr and syslinux on the newly defined filesystem.

The memtest86 utility has been around for quite a few years, yet it's a key tool for a sysadmin when faced with a flaky computer. It does only one thing, but it does it very well: it tests the RAM of a system very thoroughly. Simply boot off the USB drive, select memtest from the menu, and press Enter, and memtest86 will load and begin testing the RAM of the system immediately. At this point, you can remove the USB drive from the computer. It's no longer needed, as memtest86 is very small and loads completely into memory on startup.

The ntpwd Windows password "cracking" tool can be a controversial tool, but it is included in the Billix distribution because, as a system administrator, I've been asked countless times to get into Windows systems (or accounts on Windows systems) where the password has been lost or forgotten. The ntpwd utility can be a bit daunting, as the UI is text-based and nearly nonexistent, but it does a good job of mounting FAT32- or NTFS-based partitions, editing the SAM account database and saving those changes. Be sure to read all the messages that ntpwd displays, and take care to select the proper disk partition to edit. Also, take the program's advice and nullify a password rather than trying to change it from within the interface; zeroing the password works much more reliably.

DBAN (otherwise known as Darik's Boot and Nuke) is a very good "nuke it from orbit" hard disk wiper. It provides various levels of wipe, from a basic "overwrite the disk with zeros" to a full DoD-certified, multipass wipe. Like memtest86, DBAN is small and loads completely into memory, so you can boot the utility, remove the USB drive, start a wipe and move on to another system. I've used this to wipe clean disks on systems before handing them over to a recycler or before selling a system.

In closing, Billix may not make you coffee in the morning or eradicate Windows from the face of the earth, but having a USB key in your pocket that offers you the functionality to do all of those tasks quickly and easily can make the life of a system administrator (or any Linux-oriented person) much easier.

Bill Childers is an IT Manager in Silicon Valley, where he lives with his wife and two children. He enjoys Linux far too much, and probably should get more sun from time to time. In his spare time, he does work with the Gilroy Garlic Festival, but he does not smell like garlic.


Expanding Billix
It's relatively easy to expand Billix to support other Linux distributions, such as Knoppix or the Ubuntu live CDs. Copy the contents of the Billix USB tarball to a directory on your hard disk, and download the distro you want. Copy the necessary kernel and initrd to the directory where you put the contents of the USB tarball, taking care to rename any files if there are files in that directory with the same name. Copy any compressed filesystems that your new distro may use to the USB drive (for example, Knoppix has the KNOPPIX directory, and Puppy Linux uses PUP_XXX.SFS). Then, look at the boot configuration for that distro (it should be in isolinux.cfg). Take the necessary lines out of that file, and put them in the Billix syslinux.cfg file, changing filenames as necessary. Optionally, you can add a menu item to the boot.msg file. Finally, run syslinux -s <device>, and reboot your system to test out your newly expanded Billix.
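As a sketch of what such an addition might look like, here is a hypothetical stanza for a Knoppix entry in Billix's syslinux.cfg. The label, kernel filename, initrd name and append options below are illustrative only; take the real lines from the distro's own isolinux.cfg:

```
label knoppix
  kernel linux24
  append ramdisk_size=100000 init=/etc/init lang=us initrd=minirt.gz
```

With a stanza like this in place, you could type knoppix at the Billix boot prompt (and add a matching line to boot.msg if you want it listed in the menu).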

I have a 2GB USB drive that has a "Super-Billix" installation that includes Knoppix and Ubuntu 8.04. An added bonus of having the entire Ubuntu live CD in your pocket is that, thanks to the speed of USB 2.0, you can install Ubuntu in less than ten minutes, which would be really useful at an installfest. There is good information on creating Ubuntu-bootable USB drives available at the Pendrive Linux Web site.

Alternatively, a really neat thing to do (but way beyond the scope of this article) is to convert Billix into a network-boot (via Pre-Execution Environment, or PXE) environment. I've actually got a VMware virtual machine running Billix as a PXE boot server.

Resources

Billix Project Page: sourceforge.net/projects/billix

Damn Small Linux: www.damnsmalllinux.org

DBAN Project Page: dban.sourceforge.net

Knoppix: www.knoppix.net

Pendrive Linux: www.pendrivelinux.com

Syslinux: syslinux.zytor.com/index.php

Pxelinux: syslinux.zytor.com/pxe.php

U3 Removal Software: www.u3.com/uninstall


If a tree falls in the woods and no one is there to hear it, does it make a sound? This is the classic query designed to place your mind into the Zen-like state known as the silent mind. Whether or not you want to hear a tree fall, if you run a network, you probably want to hear a server when it goes down. Many organizations utilize the long-established Simple Network Management Protocol (SNMP) as a way to monitor their networks proactively and listen for things going down.

At a rudimentary level, SNMP requires only two items to work: a management server and a managed device (or devices). The management server pulls status and health information at regular intervals from the managed devices and stores the information in a table. Managed devices use local SNMP agents to notify the management server when defined behavior occurs (such as errors, or "traps"), which are stored in the same table on the server. The result is an accurate, real-time reporting mechanism for outages. However, SNMP as a protocol does not stipulate how the data in these tables is to be presented and managed for the end user. That's where a promising new open-source network-monitoring software called Zenoss (pronounced Zeen-ohss) comes in.

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment. The Core version of Zenoss used in this article is freely available under the GPLv2. An Enterprise version also is available, with additional features and support. In this article, we install Zenoss on a CentOS 5.1 system to observe its usefulness in a network-monitoring role. From there, we create a simulated multisystem server network using the following systems: a Fedora-based Postfix e-mail server, an Ubuntu server running Apache and a Windows server running File and Print services. To conserve space, only the CentOS installation is discussed in detail here. For the managed systems, only SNMP installation and configuration are covered.

Building the Zenoss Server
Begin by selecting your hardware. Zenoss lacks specific hardware requirements, but it relies heavily on MySQL, so you can use MySQL requirements as a rough guideline. I recommend using the fastest processor available, 1GB of memory, fast-enough hard disks to provide acceptable MySQL performance and Gigabit Ethernet for the network. I ran several test configurations, and this configuration seemed adequate enough for a medium-size network (100+ nodes/devices). To keep configuration simple, all firewalls and SELinux instances were disabled in the test environment. If you use firewalls in your environment, open ports 161 (SNMP), 8080 (Zenoss Management Page) and 514 (if you integrate syslog with Zenoss).

Install CentOS 5.1 on the server using your own preferences. I used a bare install with no X Window System or desktop manager. Assign a static IP address and any other pertinent network information (DNS servers and so forth). After the OS install is complete, install the following packages using the yum command below:

yum install mysql mysql-server net-snmp net-snmp-utils gmp httpd

If the mysqld or the httpd service has not started after yum installs it, start it and set it to run for your configured runlevel. Next, download the latest Zenoss Core .rpm from Sourceforge.net (2.1.3 at the time of this writing), and install it using rpm from the command line. To start all the Zenoss-related daemons after the rpm has been installed, type the following at a command prompt:

service zenoss start

Launch a Web browser from any machine, and type the IP address of the Zenoss server using port 8080 (for example, http://192.168.142.6:8080). Log in to the site using the default account admin, with a password of zenoss. This brings up the main dashboard. The dashboard is a compartmentalized view of the state of your managed devices. If you don't like the default display, you can arrange your dashboard any way you want, using the various drop-down lists on the portlets (windows). I recommend setting the Production States portlet to display Production, so we can see our test systems after they are added.

Almost everything related to managed devices in Zenoss revolves around classes. With classes, you can create an infinite number of systems, processes or service classifications to monitor. To begin adding devices, we need to set our SNMP community strings at the top-level Devices class. SNMP community strings are like passphrases used to authenticate traffic between devices. If one device wants to communicate with another, they must have matching community names/strings. In many deployments, administrators use the default community name of public (and/or private), which creates a security risk. I recommend changing these strings and making them into a short phrase. You can add numbers and characters to make the community name more complex to guess/crack, but I find phrases easier to remember.

Zenoss and the Art of Network Monitoring
If a server goes down, do you want to hear it? JERAMIAH BOWLING

Click on the Devices link on the navigation menu on the left, so that Devices is listed near the top of the page. Click on the zProperties tab, and scroll down. Enter an SNMP community string in the zSNMPCommunity field. For our test environment, I used the string whatsourclearanceclarence. You can use different strings with different subclasses of systems or individual systems, but by setting it at the Devices class, it will be used for any subclasses unless it is overridden. You also could list multiple strings in the zSNMPCommunities under the Devices class, which allows you to define multiple strings for the discovery process discussed later. Make sure your community string (zSNMPCommunity) is in this list.

Installing Net-SNMP on Linux Clients

Now let's set up our Linux systems so they can talk to the Zenoss server. After installing and configuring the operating systems on our other Linux servers, install the Net-SNMP package on each, using the following command on the Ubuntu server:

sudo apt-get install snmpd

And on the Fedora server use:

yum install net-snmp

Once the Net-SNMP packages are installed, edit out any other lines in the Access Control sections at the beginning of /etc/snmp/snmpd.conf, and add the following lines:

#          sec.name    source            community
com2sec    local       localhost         whatsourclearanceclarence
com2sec    mynetwork   192.168.142.0/24  whatsourclearanceclarence

#          group.name  sec.model  sec.name
group      MyROGroup   v1         local
group      MyROGroup   v1         mynetwork
group      MyROGroup   v2c        local
group      MyROGroup   v2c        mynetwork

#          incl/excl   subtree    mask
view       all         included   .1         80

#          context     sec.model  sec.level  prefix  read  write  notif
access     MyROGroup   ""         any        noauth  exact  all   none   none

Do not edit out any lines beneath the last Access Control sections. Please note that the above is only a mildly restrictive configuration. Consult the snmpd.conf file or the Net-SNMP documentation if you want to tighten access. On the Ubuntu server, you also may have to change the following line in the /etc/default/snmpd file to allow SNMP to bind to anything other than the local loopback address:

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'
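Once snmpd is restarted with the new configuration, it is worth sanity-checking the agent before moving on. A quick check with snmpwalk (part of the Net-SNMP tools) might look like the fragment below; the 192.168.142.6 address is just a placeholder for another host on the example subnet:

```shell
# Walk the system subtree locally, then from another host on the
# 192.168.142.0/24 subnet, using the community string set above.
snmpwalk -v 2c -c whatsourclearanceclarence localhost system
snmpwalk -v 2c -c whatsourclearanceclarence 192.168.142.6 system
```

If both walks return sysDescr and the other system-group OIDs, Zenoss should be able to poll the host as well.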

Installing SNMP on Windows

On the Windows server, access the Add/Remove Programs utility from the Control Panel. Click on the Add/Remove Windows Components button on the left. Scroll down the list of Components, check off Management and Monitoring Tools, and click on the Details button. Check Simple Network Management Protocol in the list, and click OK to install. Close the Add/Remove window, and go into the Services console from Administrative Tools in the Control Panel. Find the SNMP service in the list, right-click on it, and click on Properties to bring up the service properties tabs. Click on the Traps tab, and type in the community name. In the list of Trap Destinations, add the IP address of the Zenoss server. Now, click on the Security tab, check off the Send authentication trap box, enter the community name, and give it READ-ONLY rights. Click OK, and restart the service.

Return to the Zenoss management Web page. Click the Devices link to go into the subclass of /Devices/Servers/Windows, and on the zProperties tab, enter the name of a domain admin account and password in the zWinUser and zWinPassword fields. This account gives Zenoss access to the Windows Management Instrumentation (WMI) on your Windows systems. Make sure to click Save at the bottom of the page before navigating away.

Adding Devices into Zenoss

Now that our systems have SNMP, we can add them into Zenoss. Devices can be added individually or by scanning the network. Let's do both. To add our Ubuntu server into Zenoss, click on the Add Device link under the Management navigation section. Enter the IP address of the server and the community name. Under Device Class Path, set the selection to /Server/Linux. You could add a variety of other hardware, software and Zenoss information on this page before adding a system, but at a minimum, an IP address, name and community name is required (Figure 1). Click the Add Device button, and the discovery process runs. When the results are displayed, click on the link to the new device to access it.

Figure 1. Adding a Device into Zenoss

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment.


To scan the network for devices, click the Networks link under the Browse By section of the navigation menu. If your network is not in the list, add it using CIDR notation. Once added, check the box next to your network, and use the drop-down arrow to click on the Select Discover Devices option. You will see a similar results page as the one from before. When complete, click on the links at the bottom of the results page to access the new devices. Any device found will be placed in the /Discovered class. Because we should have discovered the Fedora server and the Windows server, they should be moved to the /Devices/Servers/Linux and /Devices/Servers/Windows classes, respectively. This can be done from each server's Status tab by using the main drop-down list and selecting Manage→Change Class.

If all has gone well so far, we have a functional SNMP monitoring system that is able to monitor heartbeat/availability (Figure 2) and performance information (Figure 3) on our systems. You can customize other various Status and Performance Monitors to meet your needs, but here we will use the default localhost monitors.

Figure 2. The Zenoss Dashboard

Figure 3. Performance data is collected almost immediately after discovery.

Creating Users and Setting E-Mail Alerts

At this point, we can use the dashboard to monitor the managed devices, but we will be notified only if we visit the site. It would be much more helpful if we could receive alerts via e-mail. To set up e-mail alerting, we need to create a separate user account, as alerts do not work under the admin account. Click on the Setting link under the Management navigation section. Using the drop-down arrow on the menu, select Add User. Enter a user name and e-mail address when prompted. Click on the new user in the list to edit its properties. Enter a password for the new account, and assign a role of Manager. Click Save at the bottom of the page. Log out of Zenoss, and log back in with the new account. Bring the settings page back up, and enter your SMTP server information. After setting up SMTP, we need to create an Alerting Rule for our new user. Click on the Users tab, and click on the account just created in the list. From the resulting page, click on the Edit tab, and enter the e-mail address to which you want alerts sent. Now, go to the Alerting Rules tab and create a new rule using the drop-down arrow. On the edit tab of the new Alerting Rule, change the Action to email, Enabled to True, and change the Severity formula to >= Warning (Figure 4). Click Save.

Figure 4. Creating an Alert Rule

Figure 5. Zenoss alerts are sent fresh to your mailbox.



The above rule sends alerts when any Production server experiences an event rated Warning or higher (Figure 5). Using a filter, you can create any number of rules and have them apply only to specific devices or groups of devices. If you want to limit your alerts by time, to working hours for example, use the Schedule tab on the Alerting Rule to define a window. If no schedule is specified (the default), the rule runs all the time. In our rule, only one user will be notified. You also can create groups of users from the Settings page, so that multiple people are alerted, or you could use a group e-mail address in your user properties.

Services and Processes

We can expand our view of the test systems by adding a process and a service for Zenoss to monitor. When we refer to a process in Zenoss, we mean an active program, usually a daemon, running on a managed device. Zenoss uses regular expressions to monitor processes.

To monitor Postfix on the mail server, first, let's define it as a process. Navigate to the Processes page under the Classes section of the navigation menu. Use the drop-down arrow next to OS Processes, and click Add Process. Enter Postfix as the process ID. When you return to the previous page, click on the link to the new process. On the edit tab of the process, enter master in the Regex field. Click Save before navigating away. Go to the zProperties tab of the

process, and make sure the zMonitor field is set to True. Click Save again. Navigate back to the mail server from the dashboard, and on the OS tab, use the topmost menu's drop-down arrow to select Add→Add OSProcess. After the process has been added, we will be alerted if the Postfix process degrades or fails. While still on the OS tab of the

Figure 6. Zenoss comes with a plethora of predefined Windows services to monitor.


server, place a check mark next to the new Postfix process, and from the OS Processes drop-down menu, select Lock OSProcess. On the next set of options, select Lock from deletion. This protects the process from being overwritten if Zenoss remodels the server.

Services in Zenoss are defined by active network ports instead of running daemons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1-65535 and include common network apps/protocols, such as SMTP (Port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the main menu's drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class Field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).

Only the Beginning

There are a multitude of other features in Zenoss that space here prevents covering, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9) and Zenpacks that provide additional monitoring and performance-capturing capabilities for common applications.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind as long as you work with networks, with Zenoss, at least you will be able to sleep at night, knowing you will hear things when they go down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

Resources

Zenoss: www.zenoss.com

Zenoss SourceForge Downloads Page: sourceforge.net/project/showfiles.php?group_id=163126

NET-SNMP: net-snmp.sourceforge.net

CentOS: www.centos.org

CentOS 5 Mirrors: isoredirect.centos.org/centos/5/isos/i386

Figure 7. Monitoring HTTP as an IPService

Figure 8. Zenoss automatically maps your network for you.

Figure 9. Multiple sites can be monitored geographically with the Google Maps API.


In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market. Dell and HP Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects, such as the Etherboot Project and the Linux Terminal Server Project, have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys world-wide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?

A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot EXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the

boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP; stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4). For example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a

Thin Clients Booting over a Wireless Bridge

How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON


Figure 1. LTSP Traffic Profile and Boot Sequence

thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.

How to Set Up a Thin-Client Network with a Wireless Bridge

One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1 serving PXE clients.

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap: 192.168.0.0/24
rpc.mountd: 192.168.0.0/24
rpc.statd: 192.168.0.0/24
in.tftpd: 192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart

sudo invoke-rc.d nfs-common restart

sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide in getting the bridge running so that the clients can boot over the bridge:

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet


adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all daemon programs manually.

Evaluating the Performance of a Thin-Client Network

The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to a wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between

the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us the repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.
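The chamber's exact dB-to-distance mapping is its own, but the idea can be approximated with the free-space path-loss model. The sketch below is my own illustration, not the Azimuth's formula: it inverts L = 20*log10(d) + 20*log10(f) - 147.55 (d in metres, f in Hz) to turn an attenuation in dB into an approximate separation at 2.4GHz:

```python
import math

def fspl_distance(attenuation_db, freq_hz=2.4e9):
    """Estimate the distance (metres) corresponding to a given
    free-space path loss, by inverting
    L = 20*log10(d) + 20*log10(f) - 147.55."""
    exponent = (attenuation_db - 20 * math.log10(freq_hz) + 147.55) / 20
    return 10 ** exponent

# 80dB of attenuation at 2.4GHz comes out to roughly 100 metres.
print(round(fspl_distance(80.0), 1))
```

Real indoor propagation is lossier than free space, so a hardware chamber's calibrated mapping will differ; this only shows why a dB step translates into an exponential step in distance.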

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber;



Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Conclusion

As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our tests show, as illustrated in Figure 5, heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.

Acknowledgement

This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03/IN3/1396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now, you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS. In some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in

remote power and remote terminals or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup

You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP

Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), and locate the subnet section (or each host section, if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;
filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus

Set up a PXE server and then add menus to boot kickstart images, rescue disks and diagnostic tools all from the network. KYLE RANKIN



Install TFTP

After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page and check on its conventions if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
#   the trivial file transfer protocol. The tftp protocol is
#   often used to boot diskless workstations, download
#   configuration files to network-aware printers, and to
#   start the installation process for some operating systems.
service tftp
{
    disable     = no
    socket_type = dgram
    protocol    = udp
    wait        = yes
    user        = root
    server      = /usr/sbin/in.tftpd
    server_args = -s /var/lib/tftpboot
    per_source  = 11
    cps         = 100 2
    flags       = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.
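Incidentally, the tsize requirement mentioned above refers to an RFC 2347 option that pxelinux appends to its initial read request, so it can learn the file size before the transfer begins. As a rough sketch of that protocol detail (the filename here is just an example, not something you need to configure), a TFTP read request datagram can be built like this:

```python
import struct

def tftp_rrq(filename, mode="octet", options=None):
    """Build a TFTP read request (RRQ) datagram: opcode 1, then the
    NUL-terminated filename and mode, then any RFC 2347 option
    extensions appended as NUL-terminated name/value pairs."""
    packet = struct.pack("!H", 1)                 # opcode 1 = RRQ
    packet += filename.encode("ascii") + b"\x00"  # filename
    packet += mode.encode("ascii") + b"\x00"      # transfer mode
    for name, value in (options or {}).items():
        packet += name.encode("ascii") + b"\x00"
        packet += value.encode("ascii") + b"\x00"
    return packet

# A pxelinux-style request for pxelinux.0, asking the server to
# report the file size (tsize=0 means "please tell me the size"):
req = tftp_rrq("pxelinux.0", options={"tsize": "0"})
```

A server that supports the option answers with an option acknowledgment carrying the real size, which is why tftpd-hpa and atftp work where older servers do not.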

Add Syslinux
Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.

PXE Menu Magic
You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS, with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
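The lookup order above can be sketched as a small function. This is an illustration of the naming scheme only, not pxelinux code; the 01- prefix is the ARP hardware type for Ethernet, and the hex IP is the address rendered as eight uppercase hex digits:

```python
def pxelinux_config_candidates(mac, ip):
    """Return the config filenames pxelinux tries, most specific first."""
    names = []
    # 1. MAC-based file: hardware type (01 for Ethernet) plus the
    #    MAC with hyphens instead of colons, all lowercase.
    names.append("01-" + mac.lower().replace(":", "-"))
    # 2. IP-based files: the address as uppercase hex, dropping one
    #    digit from the end after each failed lookup.
    hex_ip = "".join("%02X" % int(octet) for octet in ip.split("."))
    for i in range(len(hex_ip), 0, -1):
        names.append(hex_ip[:i])
    # 3. Finally, the catch-all file named "default".
    names.append("default")
    return names

# For a host with the MAC and an IP from the examples in this article:
pxelinux_config_candidates("88:99:AA:BB:CC:DD", "10.0.0.89")
# -> ['01-88-99-aa-bb-cc-dd', '0A000059', '0A00005', ..., '0', 'default']
```

Seeing the candidate list makes it clear why grouping machines by IP range works: every host in 10.0.0.x shares the shorter hex prefixes, so one file can cover the whole group.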

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux

label linux
    kernel vmlinuz-centos-3.6
    append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?
The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what


You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software.

happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools like Knoppix and memtest86+ available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus
Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix
label 2
    kernel vmlinuz-centos-4.5-64
    append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option lists a message, if there are any to display, by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------
<F1> Main | <F2> Help
Default image will boot in 30 seconds...

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk
One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk, I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the one I used above in my example:

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx5.1.1 and miniroot-knx5.1.1.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove 2


With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them.

from the append line.

The final step might be the largest for you if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.
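For reference, a minimal /etc/exports entry for the share used in this example might look like the following (the client range and export options are my assumptions; tighten them to match your network), followed by running exportfs -ra to publish it:

```
/mnt/knoppix/5.1.1   10.0.0.0/255.255.255.0(ro,no_root_squash,async)
```

Exporting read-only is enough here, because the Knoppix clients never write back to the share.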

Extra Features: Memtest86+
Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
    kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion
There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.



Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan
IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825–1829). Later, in 1998, these RFCs were deprecated by RFCs 2401–2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.
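The ICV is essentially a keyed message authentication code: both peers hold the secret, the sender computes a digest over the packet, and the receiver recomputes it and compares. Here is a toy sketch of that mechanism using HMAC-SHA1 (an illustration of the idea only, not the actual ESP wire format):

```python
import hashlib
import hmac

# The shared secret known only to the two machines (illustrative value).
SECRET = b"shared-secret-known-only-to-both-peers"

def icv(packet):
    """Integrity check value: HMAC-SHA1 over the packet, keyed with the secret."""
    return hmac.new(SECRET, packet, hashlib.sha1).digest()

def verify(packet, received_icv):
    """The receiver recomputes the ICV and compares in constant time."""
    return hmac.compare_digest(icv(packet), received_icv)

packet = b"\x45\x00..."               # stand-in for an encapsulated payload
tag = icv(packet)
assert verify(packet, tag)            # an intact packet passes
assert not verify(packet + b"x", tag) # any modification in transit fails
```

This is also why NAT causes trouble for IPsec, as discussed below: a device that rewrites the packet in transit invalidates the signature the peer expects.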

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

Creating VPNs with IPsec and SSL/TLS
How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN

Figure 1. A Basic VPN Tunnel

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. SWAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a set-mark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN
OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.

Figure 2. An IPsec Tunnel ESP Packet

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces.

Example: Setting Up a VPN Tunnel with IPsec and Openswan
First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
    #
    # Simply use raw RSA keys.
    # After starting openswan, run:
    #   ipsec showhostkey --left (or --right)
    # and fill in the connection similarly
    # to the example below.
    left=192.168.0.89
    leftrsasigkey=0sAQPP...
    # The remote user.
    #
    right=192.168.0.92
    rightrsasigkey=0sAQON...
    type=tunnel
    auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsa.key

Then, copy and paste the contents of rsa.key into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of:

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. An output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                    [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel               [OK]
NETKEY detected, testing for disabled ICMP send_redirects    [OK]
NETKEY detected, testing for disabled ICMP accept_redirects  [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets)  [OK]
Checking that pluto is running                     [OK]
Checking for 'ip' command                          [OK]
Checking for 'iptables' command                    [OK]
Opportunistic Encryption Support                   [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets flowing in one direction; this is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.


Example: Setting Up a VPN Tunnel with OpenVPN
First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side in a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.
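For example, the key could be copied over SSH like this (the user, host and destination path are illustrative; use whatever matches your setup):

```
scp static.key root@192.168.0.92:/etc/openvpn/static.key
```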

This type of shared key is the simplest kind of key; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side create the following configuration filenamed clientconf

remote serverIpAddressOrHostName

dev tun

ifconfig 10002 10001

secret statickey

comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.
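What "not firewalled" means depends on your rule set; with iptables, rules along these lines would admit the tunnel traffic (the interface name and port are assumptions based on the defaults above; adjust them to your setup):

```
# Accept OpenVPN's channel on the default UDP port
iptables -A INPUT -p udp --dport 1194 -j ACCEPT
# Accept traffic arriving on the tunnel interface itself
iptables -A INPUT -i tun0 -j ACCEPT
```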

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar  3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read from or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).

IPsec and OpenVPN: a Short Comparison
IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement
Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era
Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security
Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.
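As a sketch of how definer rights look in MySQL 5 (the schema, table and account names here are invented for illustration, not taken from the article), a procedure declared with SQL SECURITY DEFINER plus an EXECUTE-only grant confines the application account to the audited code path:

```sql
-- Executes with the creator's privileges; the caller needs no table rights.
CREATE PROCEDURE bank.transfer_funds(IN from_acct INT,
                                     IN to_acct   INT,
                                     IN amount    DECIMAL(10,2))
SQL SECURITY DEFINER
BEGIN
  UPDATE accounts SET balance = balance - amount WHERE account_id = from_acct;
  UPDATE accounts SET balance = balance + amount WHERE account_id = to_acct;
  -- Every transfer is logged; callers cannot bypass this step.
  INSERT INTO audit_log (account_id, delta)
  VALUES (from_acct, -amount), (to_acct, amount);
END;

-- The application account may call the procedure but cannot touch the tables.
GRANT EXECUTE ON PROCEDURE bank.transfer_funds TO 'app_user'@'%';
```

When entering a multi-statement body like this in the mysql client, you would wrap it in a DELIMITER change so the semicolons inside BEGIN...END are not treated as statement terminators.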

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Data Abstraction
It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequence of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
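In outline, the fund-transfer logic might be packaged like this. This is a sketch only: the table, column and procedure names are hypothetical, and production code would also check withdrawal limits and handle errors, as the article describes:

```sql
DELIMITER //

CREATE PROCEDURE transfer_funds(
    IN  p_from   INT,
    IN  p_to     INT,
    IN  p_amount DECIMAL(10,2),
    OUT p_ok     INT)
BEGIN
    DECLARE v_balance DECIMAL(10,2);

    START TRANSACTION;

    -- Lock the source row and read its balance
    SELECT balance INTO v_balance
      FROM accounts WHERE account_id = p_from FOR UPDATE;

    IF v_balance >= p_amount THEN
        UPDATE accounts SET balance = balance - p_amount
         WHERE account_id = p_from;
        UPDATE accounts SET balance = balance + p_amount
         WHERE account_id = p_to;
        COMMIT;
        SET p_ok = 1;
    ELSE
        ROLLBACK;
        SET p_ok = 0;
    END IF;
END//

DELIMITER ;

-- The application now makes a single round trip:
-- CALL transfer_funds(101, 102, 150.00, @ok);
```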

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure

to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications
Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in

www.linuxjournal.com | 35

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

For instance, in a banking application, transfer-of-funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications conceivably could have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed
It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while

the latest generation of programming languages (.NET and Java) has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.
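A stored program version of such a number-crunching benchmark might look like the following sketch (this is an illustration of the kind of code being timed, not the exact code used in the trial):

```sql
DELIMITER //

-- Count the primes up to p_max by trial division
CREATE PROCEDURE count_primes(IN p_max INT, OUT p_count INT)
BEGIN
    DECLARE n, d, is_prime INT;
    SET p_count = 0;
    SET n = 2;
    WHILE n <= p_max DO
        SET is_prime = 1;
        SET d = 2;
        WHILE d * d <= n AND is_prime = 1 DO
            IF n % d = 0 THEN SET is_prime = 0; END IF;
            SET d = d + 1;
        END WHILE;
        SET p_count = p_count + is_prime;
        SET n = n + 1;
    END WHILE;
END//

DELIMITER ;

-- CALL count_primes(10000, @primes); SELECT @primes;
```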

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice



Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation
Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping
It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?
Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes

you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic.

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes and will take effect for all applications that access the database, including ad hoc SQL.
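As a sketch of the denormalization idea (the table and column names here are hypothetical), a trigger might keep a running total current whenever a sale is recorded:

```sql
-- Fires after every INSERT on sales, for any client, including ad hoc SQL
CREATE TRIGGER sales_after_insert
AFTER INSERT ON sales
FOR EACH ROW
    UPDATE customers
       SET total_purchases = total_purchases + NEW.amount
     WHERE customer_id = NEW.customer_id;
```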

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.
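A purge task of that kind might be scheduled like this (a sketch with a hypothetical sessions table; it requires MySQL 5.1 or later with the event scheduler turned on):

```sql
-- Enable the scheduler (off by default)
SET GLOBAL event_scheduler = ON;

-- Purge stale session rows once a day
CREATE EVENT purge_old_sessions
    ON SCHEDULE EVERY 1 DAY
    DO DELETE FROM sessions
        WHERE last_seen < NOW() - INTERVAL 30 DAY;
```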

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
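For instance, a pricing rule could be centralized in a stored function and then used from ordinary SQL (the names and the discount rule here are hypothetical):

```sql
DELIMITER //

-- Apply a 10% volume discount on orders of 100 units or more
CREATE FUNCTION discounted_price(p_price DECIMAL(10,2), p_qty INT)
    RETURNS DECIMAL(10,2)
    DETERMINISTIC
BEGIN
    IF p_qty >= 100 THEN
        RETURN p_price * p_qty * 0.90;
    END IF;
    RETURN p_price * p_qty;
END//

DELIMITER ;

-- Used within a standard SQL call:
-- SELECT order_id, discounted_price(unit_price, quantity) FROM order_items;
```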

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006; with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.



In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does

not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.

The ha.cf File
The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around! Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive: the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW


Listing 1. The /etc/ha.d/authkeys File

auth 1

1 sha1 ThisIsAVeryLongAndBoringPassword


increasing the keepalive.

The deadtime is next. This is the time to wait without

hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos heartbeat ip address is 192.168.1.11

# phobos heartbeat ip address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn

on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File
The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3> ...

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right, the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).




Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \

IPaddr::192.168.12.1 \

Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \

killnfsd \

nfs-common \

nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second. Not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting
Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion
I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include

active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation
To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more space. I recommend at least a 1GHz processor or above with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two

ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING


numbers (3891 and 3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management
Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.
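Because scheduling has to happen from the CLI, a cron entry calling the instance's db2bak backup script can fill that gap. This is only a sketch: the slapd-yourservername path mirrors the naming used in this article, and the 2am schedule is arbitrary.

```
# crontab entry (illustrative): nightly directory backup at 2am
0 2 * * * /opt/fedora-ds/slapd-yourservername/db2bak
```

db2bak writes a timestamped backup of the instance's databases, the same operation the console's backup button performs.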

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for

www.linuxjournal.com | 43

Figure 1. Install Options
Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication
Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora + FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already to make life easier.

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On



In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others.

Figure 5. User Screen
Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.
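As an aside, the Replication Manager account these agreements bind with corresponds roughly to an LDIF record like the following. This is a sketch: the objectClasses and password value are illustrative, and only the DN, uid=RManager,cn=config, matters to the agreements.

```
dn: uid=RManager,cn=config
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetorgperson
uid: RManager
cn: Replication Manager
sn: Manager
userPassword: use-the-same-password-on-both-servers
```

Keeping the entry under cn=config (rather than under your domain) means it is not itself replicated, which is why it must be created identically on each server.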

Authenticating Desktop Clients
With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM, the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
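For the curious, the edits the utility makes look roughly like the excerpts below. This is a sketch: the server address, port and search base are placeholders for your environment.

```
# /etc/nsswitch.conf (excerpt) - consult LDAP after local files
passwd: files ldap
shadow: files ldap
group:  files ldap

# /etc/ldap.conf (excerpt) - where and how to search
host ldap.example.com
port 3891
base dc=example,dc=com
```

Knowing what the GUI writes is handy when you script the same change across many clients or need to undo it by hand.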


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some dll issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions
FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and makes it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.



Setting up replication in FDS is a relatively painless process.

Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


CONTENTS: SPECIAL SYSTEM ADMINISTRATION ISSUE

5 SPECIAL_ISSUE.TAR.GZ
System Administration: Instant Gratification Edition
Shawn Powers

6 CFENGINE FOR ENTERPRISE CONFIGURATION MANAGEMENT
How to use cfengine to manage configuration files across large numbers of machines.
Scott Lackey

10 SECURED REMOTE DESKTOP/APPLICATION SESSIONS
Different ways to control a Linux system over a network.
Mick Bauer

14 BILLIX: A SYSADMIN'S SWISS ARMY KNIFE
Build a toolbox in your pocket by installing Billix on that spare USB key.
Bill Childers

17 ZENOSS AND THE ART OF NETWORK MONITORING
Stay on top of your network with an enterprise-class monitoring tool.
Jeramiah Bowling

22 THIN CLIENTS BOOTING OVER A WIRELESS BRIDGE
Setting up a thin-client network and some useful operation/administration tools.
Ronan Skehill, Alan Dunne and John Nelson

26 PXE MAGIC: FLEXIBLE NETWORK BOOTING WITH MENUS
What if you never had to carry around an install or rescue CD again? Set up a PXE boot server with menus, and put them all on the network.
Kyle Rankin

30 CREATING VPNS WITH IPSEC AND SSL/TLS
The two most common and current techniques for creating VPNs.
Rami Rosen

34 MYSQL 5 STORED PROCEDURES: RELIC OR REVOLUTION?
Do MySQL 5 Stored Procedures produce tiers of joy or sorrow?
Guy Harrison

38 GETTING STARTED WITH HEARTBEAT
Availability in a heartbeat.
Daniel Bartholomew

42 FEDORA DIRECTORY SERVER: THE EVOLUTION OF LINUX AUTHENTICATION
Want an alternative to OpenLDAP?
Jeramiah Bowling


Linux Journal 2009 Lineup

JANUARY: Security
FEBRUARY: Web Development
MARCH: Desktop
APRIL: System Administration
MAY: Cool Projects
JUNE: Readers' Choice Awards
JULY: Mobile Linux
AUGUST: Kernel Capers
SEPTEMBER: Cross-Platform Development
OCTOBER: Hack This
NOVEMBER: Infrastructure
DECEMBER: Embedded


USPS LINUX JOURNAL (ISSN 1075-3583) (USPS 12854) is published monthly by Belltown Media, Inc., 2211 Norfolk, Ste 514, Houston, TX 77098 USA. Periodicals postage paid at Houston, Texas and at additional mailing offices. Cover price is $5.99 US. Subscription rate is $29.50/year in the United States, $39.50 in Canada and Mexico, $69.50 elsewhere. POSTMASTER: Please send address changes to Linux Journal, PO Box 16476, North Hollywood, CA 91615. Subscriptions start with the next issue. Canada Post: Publications Mail Agreement 41549519. Canada Returns to be sent to Bleuchip International, PO Box 25542, London, ON N6C 6B2.

www.linuxjournal.com

At Your Service

MAGAZINE PRINT SUBSCRIPTIONS: Renewing your subscription, changing your address, paying your invoice, viewing your account details or other subscription inquiries can instantly be done on-line: www.linuxjournal.com/subs. Alternatively, within the US and Canada, you may call us toll-free 1-888-66-LINUX (54689), or internationally +1-818-487-2089. E-mail us at subs@linuxjournal.com or reach us via postal mail: Linux Journal, PO Box 16476, North Hollywood, CA 91615-9911 USA. Please remember to include your complete name and address when contacting us.

DIGITAL SUBSCRIPTIONS: Digital subscriptions of Linux Journal are now available and delivered as PDFs anywhere in the world for one low cost. Visit www.linuxjournal.com/digital for more information, or use the contact information above for any digital magazine customer service inquiries.

LETTERS TO THE EDITOR: We welcome your letters and encourage you to submit them at www.linuxjournal.com/contact or mail them to Linux Journal, 1752 NW Market Street, 200, Seattle, WA 98107 USA. Letters may be edited for space and clarity.

WRITING FOR US: We always are looking for contributed articles, tutorials and real-world stories for the magazine. An author's guide, a list of topics and due dates can be found on-line: www.linuxjournal.com/author.

ADVERTISING: Linux Journal is a great resource for readers and advertisers alike. Request a media kit, view our current editorial calendar and advertising due dates, or learn more about other advertising and marketing opportunities by visiting us on-line: www.linuxjournal.com/advertising. Contact us directly for further information: ads@linuxjournal.com or +1 713-344-1956 ext. 2.

ON-LINE WEB SITE: Read exclusive on-line-only content on Linux Journal's Web site, www.linuxjournal.com. Also, select articles from the print magazine are available on-line. Magazine subscribers, digital or print, receive full access to issue archives; please contact Customer Service for further information: subs@linuxjournal.com.

FREE e-NEWSLETTERS: Each week, Linux Journal editors will tell you what's hot in the world of Linux. Receive late-breaking news, technical tips and tricks, and links to in-depth stories featured on www.linuxjournal.com. Subscribe for free today: www.linuxjournal.com/enewsletters.

Executive Editor: Jill Franklin, jill@linuxjournal.com
Associate Editor: Shawn Powers, shawn@linuxjournal.com
Associate Editor: Mitch Frazier, mitch@linuxjournal.com
Senior Editor: Doc Searls, doc@linuxjournal.com
Art Director: Garrick Antikajian, garrick@linuxjournal.com
Products Editor: James Gray, newproducts@linuxjournal.com
Editor Emeritus: Don Marti, dmarti@linuxjournal.com
Technical Editor: Michael Baxter, mab@cruzio.com
Senior Columnist: Reuven Lerner, reuven@lerner.co.il
Chef Français: Marcel Gagné, mggagne@salmar.com
Security Editor: Mick Bauer, mick@visi.com
Proofreader: Geri Gale

Publisher: Carlie Fairchild, publisher@linuxjournal.com
General Manager: Rebecca Cassity, rebecca@linuxjournal.com
Sales Manager: Joseph Krack, joseph@linuxjournal.com
Sales and Marketing Coordinator: Tracy Manford, tracy@linuxjournal.com
Circulation Director: Mark Irgang, mark@linuxjournal.com
Webmistress: Katherine Druckman, webmistress@linuxjournal.com
Accountant: Candy Beauchamp, acct@linuxjournal.com

Contributing Editors:
David A. Bandel • Ibrahim Haddad • Robert Love • Zack Brown • Dave Phillips • Marco Fioretti • Ludovic Marcotte • Paul Barry • Paul McKenney • Dave Taylor • Dirk Elmendorf

Linux Journal is published by, and is a registered trade name of, Belltown Media, Inc., PO Box 980985, Houston, TX 77098 USA.

Reader Advisory Panel:
Brad Abram Baillio • Nick Baronian • Hari Boukis • Caleb S. Cullen • Steve Case • Kalyana Krishna Chadalavada • Keir Davis • Adam M. Dutko • Michael Eager • Nick Faltys • Ken Firestone • Dennis Franklin Frey • Victor Gregorio • Kristian Erik Hermansen • Philip Jacob • Jay Kruizenga • David A. Lane • Steve Marquez • Dave McAllister • Craig Oda • Rob Orsini • Jeffrey D. Parent • Wayne D. Powel • Shawn Powers • Mike Roberts • Draciron Smith • Chris D. Stark • Patrick Swartz

Editorial Advisory Board:
Daniel Frye, Director, IBM Linux Technology Center
Jon "maddog" Hall, President, Linux International
Lawrence Lessig, Professor of Law, Stanford University
Ransom Love, Director of Strategic Relationships, Family and Church History Department, Church of Jesus Christ of Latter-day Saints
Sam Ockman
Bruce Perens
Bdale Garbee, Linux CTO, HP
Danese Cooper, Open Source Diva, Intel Corporation

Advertising
E-MAIL: ads@linuxjournal.com
URL: www.linuxjournal.com/advertising
PHONE: +1 713-344-1956 ext. 2

Subscriptions
E-MAIL: subs@linuxjournal.com
URL: www.linuxjournal.com/subscribe
PHONE: +1 818-487-2089
FAX: +1 818-487-4550
TOLL-FREE: 1-888-66-LINUX
MAIL: PO Box 16476, North Hollywood, CA 91615-9911 USA
Please allow 4–6 weeks for processing address changes and orders.

PRINTED IN USA

LINUX is a registered trademark of Linus Torvalds

wwwl inux journa l com | 5

Special_Issue.tar.gz

SHAWN POWERS

Welcome to the Linux Journal family! The issue you currently hold in your hands, or more precisely, the issue you're reading on your screen, was created specifically for you. It's so frustrating to subscribe to a magazine and then wait a month or two before you ever get to read your first issue. You decided to subscribe, so we decided to give you some instant gratification. Because really, isn't that the best kind?

This entire issue is dedicated to system administration. As a Linux professional or an everyday penguin fan, you're most likely already a sysadmin of some sort. Sure, some of us are in charge of administering thousands of computers, and some of us just keep our single desktop running. With Linux, though, it doesn't matter how big or small your infrastructure might be. If you want to run a MySQL server on your laptop, that's not a problem. In fact, in this issue, we even talk about stored procedures on MySQL 5, so we'll show you how to get the most out of that laptop server.

If you do manage lots of machines, however, there are some tools that can make your job much easier. Cfengine, for instance, can help with configuration management for tens, hundreds or even thousands of remote computers. It sure beats running to every server and configuring them individually. But what if you mess up those configuration files and half your servers go off-line? Don't worry, we'll teach you about Heartbeat. It's one of those high-availability programs that helps you keep going even when some of your servers die. If a computer dies alone in your server room, does anyone hear it scream? Heartbeat does.

Do you have more than one server room? More than one location? Many of us do. Programs like Zenoss can help you monitor your whole organization from one central place. In the following pages, we'll show you all about it. In fact, using a VPN (which we talk about here) and remote desktop sessions (again, covered here), you probably can manage most of your network without even leaving home. If you haven't reconfigured a server from a coffee shop across the country, you haven't lived.

I do understand, however, that lots of sysadmins enjoy going in to work. That's great; we like office coffee too. Keep reading to find many hands-on tools that will make your work day more efficient, more productive and even more fun. Billix, for example, is a USB-booting Linux distribution that has tons of great features ready to use. It can install operating systems, rescue broken computers, and if you want to do something it doesn't do by default, that's okay, the article explains how to include any custom tools you might want.

Whether it's booting thin clients over a wireless bridge or creating custom PXE boot menus, the life of a system administrator is never boring. At Linux Journal, we try to make your job easier month after month with product reviews, tech tips, games (you know, for your lunch break) and in-depth articles like you'll read here. Whether you're a command-line junkie or a GUI guru, Linux has the tools available to make your life easier.

We hope you enjoy this bonus issue of Linux Journal. If you read this whole thing and are still anxious to get your hands on more Linux and open-source-related information, but your first paper issue hasn't arrived yet, don't forget about our Web site. Check us out at www.linuxjournal.com. If you manage to read the thousands and thousands of on-line articles and still don't have your first issue, I suggest checking with the post office. There must be something wrong with your mailbox.

Shawn Powers is the Associate Editor for Linux Journal. He's also the Gadget Guy for LinuxJournal.com, and he has an interesting collection of vintage Garfield coffee mugs. Don't let his silly hairdo fool you, he's a pretty ordinary guy and can be reached via e-mail at shawn@linuxjournal.com. Or, swing by the #linuxjournal IRC channel on Freenode.net.

System Administration: Instant Gratification Edition


Cfengine is known by many system administrators to be an excellent tool to automate manual tasks on UNIX and Linux-based machines. It also is the most comprehensive framework to execute administrative shell scripts across many servers running disparate operating systems. Although cfengine is certainly good for these purposes, it also is widely considered the best open-source tool available for configuration management. Using cfengine, sysadmins with a large installation of, say, 800 machines can have information about their environment quickly that otherwise would take months to gather, as well as the ability to change the environment in an instant. For an initial example, if you have a set of Linux machines that need to have a different /etc/nsswitch.conf, and then have some processes restarted, there's no need to connect to each machine and perform these steps or even to write a script and run it on the machines once they are identified. You simply can tell cfengine that all the Linux machines running Fedora/Debian/CentOS with X GB of RAM or more need to use a particular /etc/nsswitch.conf until a newer one is designated. Cfengine can do all that in a one-line statement.

Cfengine's configuration management capabilities can work in several different ways. In this article, I focus on a make-it-so-and-keep-it-so approach. Let's consider a small hosting company configuration with three administrators and two data centers (Figure 1).

Each administrator can use a Subversion/CVS sandbox to hold repositories for each data center. The cfengine client will run on each client machine, either through a cron job or a cfengine execution daemon, and pull the cfengine configuration files appropriate for each machine from the server. If there is work to be done for that particular machine, it will be carried out and reported to the server. If there are configuration files to copy, the ones active on the client host will be replaced by the copies on the cfengine server. (Cfengine will not replace a file if the copy process is partial or incomplete.)

A cfengine implementation has three major components:

- Version control: this usually consists of a versioning system, such as CVS or Subversion.

- Cfengine internal components: cfservd, cfagent, cfexecd, cfenvd, cfagent.conf and update.conf.

- Cfengine commands: processes, files, shellcommands, groups, editfiles, copy and so forth.

The cfservd is the master daemon, configured with /etc/cfservd.conf, and it listens on port 5803 for connections to the cfengine server. This daemon controls security and directory access for all client machines connecting to it. cfagent is the client program for running cfengine on hosts. It will run either from cron, manually or from the execution daemon for cfengine, cfexecd. A common method for running the cfagent is to execute it from cron using the cfexecd in non-daemon mode. The primary reason for using both is to engage cfengine's logging system. This is accomplished using the following:

*/10 * * * * /var/cfengine/sbin/cfexecd -F

as a cron entry on Linux (unless Solaris starts to understand */10). Note that this is fairly frequent and good only for a low number of servers. We don't want 800 servers updating within

Cfengine for Enterprise Configuration Management
Cfengine makes it easier to manage configuration files across large numbers of machines. SCOTT LACKEY


Figure 1. How the Few Control the Many

the same ten minutes.

The cfenvd is the "environment daemon" that runs on the client side of the cfengine implementation. It gathers information about the host machine, such as hostname, OS and IP address. The cfenvd detects these factors about a host and uses them to determine to which groups the machine belongs. This, in effect, creates a profile for each machine that cfengine uses to determine what work to perform on each host.
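Beyond staggering cron entries, cfengine itself can spread out the load the ten-minute warning above hints at: the SplayTime control variable delays each host's run by a pseudo-random interval. A sketch for cfagent.conf; the 10-minute window is illustrative and simply mirrors the cron interval:

```
# cfagent.conf (excerpt, sketch)
control:

   SplayTime = ( 10 )   # spread client runs over up to 10 minutes
```

This is also why the later test command uses --no-splay: when debugging, you want the run to start immediately rather than after a random delay.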

The master configuration file for each host is cfagent.conf. This file can contain all the configuration information and cfengine code for the host, a subset of hosts or all hosts in the cfengine network. This file is often just a starting point, where all configurations are stored in other files and "imported" into cfagent.conf, in a very similar fashion to Nagios configuration files. The update.conf file is the fundamental configuration file for the client. It primarily just identifies the cfengine server and gets a copy of the cfagent.conf.

Figure 2. Automated Distribution of Cfengine Files

The update.conf file tells the cfengine server to deploy a new cfagent.conf file (and perhaps other files as well) if the current copy on the host machine is different. This adds some protection for a scenario where a corrupt cfagent.conf is sent out or in case there never was one. Although you could use cfengine to distribute update.conf, it should be copied manually to each host.

Cfengine "commands" are not entered on the command line. They make up the syntax of the cfengine configuration language. Because cfengine is a framework, the system administrator must write the necessary commands in cfengine configuration files in order to move and manipulate data. As an example, let's take a look at the files command as it would appear in the cfagent.conf file:

files:

  /etc/passwd mode=644
              owner=root action=fixall

  /etc/shadow mode=600
              owner=root action=fixall

This would set all machines' /etc/passwd and /etc/shadow files to the permissions listed in the file (644 and 600). It also would change the owner of the file to root and fix all of these settings if they are found to be different, each time cfengine runs. It's important to keep in mind that there are no group limitations to this particular files command. If cfengine does not have a group listed for the command, it assumes you mean any host. This also could be written as:

files:

  any::

    /etc/passwd mode=644
                owner=root action=fixall

    /etc/shadow mode=600
                owner=root action=fixall

This brings us to an important topic in building a cfengine implementation: groups. There is a groups command that can be used to assign hosts to groups based on various criteria. Custom groups that are created in this way are called soft groups. The groups that are filled by the cfenvd daemon automatically are referred to as hard groups. To use the groups feature of cfengine and assign some soft groups, simply create a groups.cf file, and tell the cfagent.conf to import it somewhere in the beginning of the file:

import:

  any::

    groups.cf

Cfengine will look in the default directory for the groups.cf file in /var/cfengine/inputs. Now you can create arbitrary groups based on any criteria. It is important to remember that the terms groups and classes are completely interchangeable in cfengine:

groups:

  development = ( nfs01 nfs02 10.0.0.17 )

  production = ( app01 app02 development )

You also can combine hard groups that have been discovered by cfenvd with soft groups:

groups:

  legacy = ( irix compiled_on_cygwin sco )
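Once defined, these classes can scope any other command. For instance, a shellcommands stanza (one of the command types listed earlier) could run only on members of the legacy group; the logger call here is purely illustrative:

```
shellcommands:

  legacy::

    "/usr/bin/logger -t cfengine run on a legacy host"
```

The same class:: prefix works identically with files, copy, editfiles and the rest, which is what makes groups the main targeting mechanism in cfengine.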

Let's get our testing setup in order. First, install cfengine on a server and a client or workstation. Cfengine has been compiled on almost everything, so there should be a package for your OS/distribution. Because the source is usually the latest version, and many versions are bug fixes, I recommend compiling it yourself. Installing cfengine gives you both the server and client binaries


Because cfengine is a framework, the system administrator must write the necessary commands in cfengine configuration files in order to move and manipulate data.

and utilities on every machine, so be careful not to run the server daemon (cfservd) on a client machine, unless you specifically intend to do that. After the install, you should have a /var/cfengine directory and the binaries mentioned previously.

Before any host can actually communicate with the cfengine server, keys must be exchanged between the two. Cfengine keys are similar to SSH keys, except they are one-way. That is to say, both the server and the client must have each other's public key in order to communicate. Years of sysadmin paranoia cause me to recommend manually copying all keys and trusting nothing. Copy /var/cfengine/ppkeys/localhost.pub from the server to all the clients and from the clients to the server in the same directory, renaming them /var/cfengine/ppkeys/root-10.1.10.1.pub, where the IP is 10.1.10.1.

On the server side, cfservd.conf must be configured to allow clients to access particular directories. To do this, create an AllowConnectionsFrom and an admit section:

# cfservd.conf

control:
   AllowConnectionsFrom = ( 192.168.0.0/24 )

admit:
   /configs/datacenter1 *.example1.com
   /configs/datacenter2 *.example2.com

To test your example client to see whether it is connecting to the cfengine server, make sure port 5308 (cfengine's default) is clear between them, and run the server with:

cfservd -v -d2

And on the client, run:

cfagent -v --no-splay

This will give you a lot of debugging information on the server side to see what's working and what isn't.

Now let's take a look at distributing a configuration file. Although cfengine has a full-featured file editor in the editfiles command, using this method for distributing configurations is not advised. The copy command will move a file from the server to the client machine with .cfnew appended to the filename. Then, once the file has been copied completely, it renames the file and saves the old copy as .cfsaved in the specified directory. Here's the copy command syntax:

copy:
   class::
      <<master-file>>
         dest=target-file
         server=server
         mode=mode
         owner=owner
         group=group
         backup=true/false
         repository=backup dir
         recurse=number/inf/0
         define=classlist

Only the dest= is required, along with the filename to save at the destination. These can be different. Here's another example:

copy:
   linux::
      ${copydir}/linux/resolv.conf
         dest=/etc/resolv.conf
         server=cfengine.example1.com
         mode=644
         owner=root
         group=root
         backup=true
         repository=/var/cfengine/cfbackup
         recurse=0
         define=copiedresolvdotconf

The last line in this copy statement assigns this host to a group called copiedresolvdotconf. Although we don't have to do anything after copying this particular file, we may want to do some action on all hosts that just had this file successfully sent to them, such as sending an e-mail or restarting a process. As another example, if you update a configuration file that is attached to a dæmon, you may want to send a SIGHUP to the process to cause it to reread the configuration file. This is common with Apache's httpd.conf or inetd.conf. If the copy is not successful, this server won't be added to the copiedresolvdotconf class. You can query all servers in the network to see whether they are members, and if not, find out what went wrong.
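A shellcommands stanza keyed to that class is one way to act on exactly the hosts that received the file. This sketch is not from the article; the dæmon being signaled (nscd, which caches resolver data) and the killall path are illustrative assumptions:

```
shellcommands:
   copiedresolvdotconf::
      # illustrative: HUP a dæmon so it rereads its config, but only on
      # hosts where the resolv.conf copy succeeded
      "/usr/bin/killall -HUP nscd"
```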

A great way to version control your config files is to use a cfengine variable for the filename being copied to control which version gets distributed. Such a line may look something like this:

copy:
   linux::
      ${copydir}/linux/${resolv_conf}

Or, better yet, you can use cfengine's class-specific variables, whose scope is limited to the class with which they are associated. This makes copy statements much more elegant and can simplify changes as your cfengine files scale:

control:
   # ${resolve_conf} value depends on context




   # is this a linux machine or hpux?
   linux:: resolve_conf = ( ${copydir}/linux/resolv.conf )
   hpux:: resolve_conf = ( ${copydir}/hpux/resolv.conf )

copy:
   linux::
      ${resolve_conf}

Here is a full cfagent.conf file that makes use of everything I've covered thus far. It also adds some practical examples of how to do sysadmin work with cfengine:

# cfagent.conf

control:
   actionsequence = ( files editfiles processes )
   AddInstallable = ( cron_restart )
   solaris:: crontab = ( /var/spool/cron/crontabs/root )
   linux:: crontab = ( /var/spool/cron/root )

files:
   solaris::
      ${crontab}
         action=touch
   linux::
      ${crontab}
         action=touch

editfiles:
   solaris::
      { ${crontab}
        AppendIfNoSuchLine "0,10,20,30,40,50 * * * * /var/cfengine/sbin/cfexecd -F"
        DefineClasses "cron_restart"
      }
   linux::
      { ${crontab}
        AppendIfNoSuchLine "0,10,20,30,40,50 * * * * /var/cfengine/sbin/cfexecd -F"
      }
      # linux doesn't need a cron restart

shellcommands:
   solaris.cron_restart::
      "/etc/init.d/cron stop"
      "/etc/init.d/cron start"

import:
   any::
      groups.cf
      copy.cf

The above is a full cfagent configuration that adds cfengine execution from cron to each client (if it's Linux or Solaris). So, effectively, once you run cfengine manually for the first time with this cfagent.conf file, cfengine will continue to run every ten minutes from that host, but you won't need to edit or restart cron. The control section of the cfagent.conf is where you can define some variables that will control how

cfengine handles the configuration file. actionsequence tells cfengine what order to execute each command, and AddInstallable is a variable that holds soft groups that get defined later in the file in a "define" statement, such as after the editfiles command, where the line is DefineClasses "cron_restart". The reason for using AddInstallable is sometimes cfengine skips over groups that are defined after command execution, and defining that group in the control section ensures that the command will be recognized throughout the configuration.

Being able to check configuration files out from a versioning system and distribute them to a set of servers is a powerful system administration tool. A number of independent tools will do a subset of cfengine's work (such as rsync, ssh and make), but nothing else allows a small group of system administrators to manage such a large group of servers. Centralizing configuration management has the dual benefit of information and control, and cfengine provides these benefits in a free, open-source tool for your infrastructure and application environments.

Scott Lackey is an independent technology consultant who has developed and deployed configuration management solutions across industry from NASA to Wall Street. Contact him at slackey@violetconsulting.net, www.violetconsulting.net.




There are many different ways to control a Linux system over a network, and many reasons you might want to do so. When covering remote control in past columns, I've tended to focus on server-oriented usage scenarios and tools, which is to say, on administering servers via text-based applications, such as OpenSSH. But what about GUI-based applications and remote desktops?

Remote desktop sessions can be very useful for technical support, surveillance and remote control of desktop applications. But it isn't always necessary to access an entire desktop; you may need to run only one or two specific graphical applications.

In this month's column, I describe how to use VNC (Virtual Network Computing) for remote desktop sessions and OpenSSH with X forwarding for remote access to specific applications. Our focus here, of course, is on using these tools securely, and I include a healthy dose of opinion as to the relative merits of each.

Remote Desktops vs. Remote Applications
So, which approach should you use, remote desktops or remote applications? If you've come to Linux from the Microsoft world, you may be tempted to assume that because Terminal Services in Windows is so useful, you have to have some sort of remote desktop access in Linux, too. But that may not be the case.

Linux and most other UNIX-based operating systems use the X Window System as the basis for their various graphical environments. And the X Window System was designed to be run over networks. In fact, it treats your local system as a self-contained network, over which different parts of the X Window System communicate.

Accordingly, it's not only possible but easy to run individual X Window System applications over TCP/IP networks; that is, to display the output (window) of a remotely executed graphical application on your local system. Because the X Window System's use of networks isn't terribly secure (the X Window System has no native support whatsoever for any kind of encryption), nowadays we usually tunnel X Window System application windows over the Secure Shell (SSH), especially OpenSSH.

The advantage of tunneling individual application windows is that it's faster and generally more secure than running the entire desktop remotely. The disadvantages are that OpenSSH has a history of security vulnerabilities, and that for many Linux newcomers, forwarding graphical applications via commands entered in a shell session is counterintuitive. And besides, as I

mentioned earlier, remote desktop control (or even just viewing) can be very useful for technical support and for security surveillance.

Using OpenSSH with X Window System Forwarding
Having said all that, tunneling X Window System applications over OpenSSH may be a lot easier than you imagine. All you need is a client system running an X server (for example, a Linux desktop system or a Windows system running the Cygwin X server) and a destination system running the OpenSSH dæmon (sshd).

Note that I didn't say "a destination system running sshd and an X server". This is because X servers, oddly enough, don't actually run or control X Window System applications; they merely display their output. Therefore, if you're running an X Window System application on a remote system, you need to run an X server on your local system, not on the remote system. The application will execute on the remote system and send its output to your local X server's display.

Suppose you've got two systems, mylaptop and remotebox, and you want to monitor system resources on remotebox with the GNOME System Monitor. Suppose further you're running the X Window System on mylaptop and sshd on remotebox.

First, from a terminal window or xterm on mylaptop, you'd open an SSH session like this:

mick@mylaptop:~$ ssh -X admin-slave@remotebox

admin-slave@remotebox's password:

Last login: Wed Jun 11 21:50:19 2008 from dtcla00b674986d

admin-slave@remotebox:~$

Note the -X flag in my ssh command. This enables X Window System forwarding for the SSH session. In order for that to work, sshd on the remote system must be configured with X11Forwarding set to yes in its /etc/ssh/sshd_config file. On many distributions, yes is the default setting, but check yours to be sure.
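A quick grep confirms the setting. This sketch is not from the article; it greps a canned sample config string so it is self-contained, but on a real system you would point grep at /etc/ssh/sshd_config instead:

```shell
# Sample sshd_config contents, standing in for /etc/ssh/sshd_config.
config='X11Forwarding yes
AllowTcpForwarding yes'

# Print the X11Forwarding directive, matching case-insensitively
# as sshd itself does when parsing keywords.
echo "$config" | grep -i '^x11forwarding'   # X11Forwarding yes
```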

Next, to run the GNOME System Monitor on remotebox, such that its output (window) is displayed on mylaptop, simply execute it from within the same SSH session:

admin-slave@remotebox:~$ gnome-system-monitor &

The trailing ampersand (&) causes this command to run in

Secured Remote Desktop/Application Sessions
Run graphical applications from afar, securely. MICK BAUER


the background, so you can initiate other commands from the same shell session. Without this, the cursor won't reappear in your shell window until you kill the command you just started.

At this point, the GNOME System Monitor window should appear on mylaptop's screen, displaying system performance information for remotebox. And that really is all there is to it.

This technique works for practically any X Window System application installed on the remote system. The only catch is that you need to know the name of anything you want to run in this way; that is, the actual name of the executable file.

If you're accustomed to starting your X Window System applications from a menu on your desktop, you may not know the names of their corresponding executables. One quick way to find out is to open your desktop manager's menu editor, and then view the properties screen for the application in question.

For example, on a GNOME desktop, you would right-click on the Applications menu button, select Edit Menus, scroll down to System/Administration, right-click on System Monitor and select Properties. This pops up a window whose Command field shows the name gnome-system-monitor.

Figure 1 shows the Launcher Properties, not for the GNOME System Monitor, but instead for the GNOME File Browser, which is a better example, because its launcher entry includes some command-line options. Obviously, all such options also can be used when starting X applications over SSH.

Figure 1. Launcher Properties for the GNOME File Browser (Nautilus)

If this sounds like too much trouble, or if you're worried about accidentally messing up your desktop menus, you simply can run the application in question, issue the command ps auxw in a terminal window, and find the entry that corresponds to your application. The last field in each row of the output from ps is the executable's name, plus the command-line options with which it was invoked.
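Since the command begins at the eleventh whitespace-separated field of ps auxw output, a little awk pulls it out. This sketch is not from the article; the captured sample line below is canned (with made-up PID and resource numbers) so the example is reproducible:

```shell
# One sample line of `ps auxw` output; the fields are: user, pid, %cpu,
# %mem, vsz, rss, tty, stat, start, time, and then the command itself.
line='mick  4242  0.3  1.2 123456 7890 ?  Sl  10:02  0:01 gnome-system-monitor'

# Field 11 is where the executable's name starts.
echo "$line" | awk '{ print $11 }'   # gnome-system-monitor
```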

Once you've finished running your remote X Window System application, you can close it the usual way (selecting Exit from its File menu, clicking the x button in the upper right-hand corner of its window and so forth). Don't forget to close your SSH session, too, by issuing the command exit in the terminal window where you're running SSH.

Virtual Network Computing (VNC)
Now that I've shown you the preferred way to run remote X Window System applications, let's discuss how to control an entire remote desktop. In the Linux/UNIX world, the most popular tool for this is Virtual Network Computing, or VNC.

Originally a project of the Olivetti Research Laboratory (which was subsequently acquired by Oracle and then AT&T before being shut down), VNC uses a protocol called Remote Frame Buffer (RFB). The original creators of VNC now maintain the application suite RealVNC, which is available in free and commercial versions, but TightVNC, UltraVNC and GNOME's vino VNC server and vinagre VNC client also are popular.

VNC's strengths are its simplicity, ubiquity and portability: it runs on many different operating systems. Because it runs over a single TCP port (usually TCP 5900), it's also firewall-friendly and easy to tunnel.

Its security, however, is somewhat problematic. VNC authentication uses a DES-based transaction that, if eavesdropped on, is vulnerable to optimized brute-force (password-guessing) attacks. This vulnerability is exacerbated by the fact that many versions of VNC support only eight-character passwords.

Furthermore, VNC session data usually is transmitted unencrypted. Only a couple flavors of VNC support TLS encryption of RFB streams, and it isn't clear how securely TLS has been implemented even in those. Thus, an attacker using a trivially hacked VNC client may be able to reconstruct and view eavesdropped VNC streams.


But Don't Real Sysadmins Stick to Terminal Sessions?
If you've read my book or my past columns, you've endured my repeated exhortations to keep the X Window System off of Internet-facing servers, or any other system on which it isn't needed, due to X's complexity and history of security vulnerabilities. So why am I devoting an entire column to graphical remote system administration?

Don't worry, I haven't gone soft-hearted (though possibly slightly soft-headed). I stand by that advice. But there are plenty of contexts in which you may need to administer or view things remotely besides hardened servers in Internet-facing DMZ networks.

And not all people who need to run remote applications in those non-Internet-DMZ scenarios are advanced Linux users. Should they be forbidden from doing what they need to do until they've mastered using the vi editor and writing bash scripts? Especially given that it is possible to mitigate some of the risks associated with the X Window System and VNC?

Of course they shouldn't, although I do encourage all Linux newcomers to embrace the command line. The day may come when Linux is a truly graphically oriented operating system like Mac OS, but for now, pretty much the entire OS is driven by configuration files in /etc (and in users' home directories), and that's unlikely to change soon.

Luckily, as it operates over a single TCP port, VNC is easy to tunnel through SSH, through Virtual Private Network (VPN) sessions and through TLS/SSL wrappers, such as stunnel. Let's look at a simple VNC-over-SSH example.

Tunneling VNC over SSH
To tunnel VNC over SSH, your remote system must be running an SSH dæmon (sshd) and a VNC server application. Your local system must have an SSH client (ssh) and a VNC client application.

Our example remote system, remotebox, already is running sshd. Suppose it also has vino, which is also known as the GNOME Remote Desktop Preferences applet (on Ubuntu systems, it's located in the System menu's Preferences section). First, presumably from remotebox's local console, you need to open that applet and enable remote desktop sharing. Figure 2 shows vino's General settings.

If you want to view only this system's remote desktop, without controlling it, uncheck Allow other users to control your desktop. If you don't want to have to confirm remote connections explicitly (for example, because you want to connect to this system when it's unattended), you can uncheck the Ask you for confirmation box. Any time you leave yourself logged in to an unattended system, be sure to use a password-protected screensaver.

vino is limited in this way: because vino is loaded only after you log in, you can use it only to connect to a fully logged-in GNOME session in progress, not, for example, to a gdm (GNOME login) prompt. Unlike vino, other versions of VNC can be invoked as needed from xinetd or inetd. That technique is out of the scope of this article, but see Resources for a link to a how-to for doing so in Ubuntu, or simply Google the string "vnc xinetd".

Continuing with our vino example, don't close that Remote Desktop Preferences applet yet. Be sure to check the Require the user to enter this password box, and to select a difficult-to-guess password. Remember, vino runs in an

already-logged-in desktop session, so unless you set a password here, you'll run the risk of allowing completely unauthenticated sessions (depending on whether a password-protected screensaver is running).

If your remote system will be run logged in but unattended, you probably will want to uncheck Ask you for confirmation. Again, don't forget that locked screensaver.

We're not done setting up vino on remotebox yet. Figure 3 shows the Advanced Settings tab.

Several advanced settings are particularly noteworthy. First, you should check Only allow local connections, after which remote connections still will be possible, but only via a port-forwarder like SSH or stunnel. Second, you may want to set a custom TCP port for vino to listen on via the Use an alternative port option. In this example, I've chosen 20226. This is an instance of useful security-through-obscurity; if our other




Figure 2. General Settings in GNOME Remote Desktop Preferences (vino)

Figure 3. Advanced Settings in GNOME Remote Desktop Preferences (vino)

(more meaningful) security controls fail, at least we won't be running VNC on the obvious default port.

Also, you should check the box next to Lock screen on disconnect, but you probably should not check Require encryption, as SSH will provide our session encryption, and adding redundant (and not-necessarily-known-good) encryption will only increase vino's already-significant latency. If you think there's only a moderate level of risk of eavesdropping in the environment in which you want to use vino (for example, on a small, private, local-area network inaccessible from the Internet), vino's TLS implementation may be good enough for you. In that case, you may opt to check the Require encryption box and skip the SSH tunneling.

(If any of you know more about TLS in vino than I was able to divine from the Internet, please write in. During my research for this article, I found no documentation or on-line discussions of vino's TLS design details whatsoever, beyond people commenting that the soundness of TLS in vino is unknown.)

This month, I seem to be offering you more "opt-out" opportunities in my examples than usual. Perhaps I'm becoming less dogmatic with age. Regardless, let's assume you've followed my advice to forego vino's encryption in favor of SSH.

At this point, you're done with the server-side settings. You won't have to change those again. If you restart your GNOME session on remotebox, vino will restart as well, with the options you just set. The following steps, however, are necessary each time you want to initiate a VNC/SSH session.

On mylaptop (the system from which you want to connect to remotebox), open a terminal window, and type this command:

mick@mylaptop:~$ ssh -L 20226:remotebox:20226 admin-slave@remotebox

OpenSSH's -L option sets up a local port-forwarder. In this example, connections to mylaptop's TCP port 20226 will be forwarded to the same port on remotebox. The syntax for this option is "-L [local port]:[remote IP or hostname]:[remote port]". You can use any available local TCP port you like (higher than 1024, unless you're running SSH as root), but the remote port must correspond to the alternative port you set vino to listen on (20226 in our example), or if you didn't set an alternative port, it should be VNC's default of 5900.

That's it for the SSH session. You'll be prompted for a password and then given a bash prompt on remotebox. But you won't need it, except to enter exit when your VNC session is finished. It's time to fire up your VNC client.

vino's companion client, vinagre (also known as the GNOME Remote Desktop Viewer), is good enough for our purposes here. On Ubuntu systems, it's located in the Applications menu, in the Internet section. In Figure 4, I've opened the Remote Desktop Viewer and clicked the Connect button. As you can see, rather than remotebox, I've entered localhost as the hostname. I've also entered port number 20226.

When I click the Connect button, vinagre connects to mylaptop's local TCP port 20226, which actually is being listened on by my local SSH process. SSH forwards this connection attempt through its encrypted connection to TCP 20226 on remotebox, which is being listened on by remotebox's vino process.

At this point, I'm prompted for remotebox's vino password (shown in Figure 2). On successful authentication, I'll have full access to my active desktop session on remotebox.

To end the session, I close the Remote Desktop Viewer's

session, and then enter exit in my SSH session to remotebox; that's all there is to it.

This technique is easy to adapt to other versions of VNC servers and clients, and probably also to other versions of SSH; there are commercial alternatives to OpenSSH, including Windows versions. I mentioned that VNC client applications are available for a wide variety of platforms; on any such platform, you can use this technique, so long as you also have an SSH client that supports port forwarding.

Conclusion
Thus ends my crash course on how to control graphical applications over networks. I hope my examples of both techniques, SSH with X forwarding and VNC over SSH, help you get started with whatever particular distribution and software packages you prefer. Be safe!

Mick Bauer (darth.elmo@wiremonkeys.org) is Network Security Architect for one of the US's largest banks. He is the author of the O'Reilly book Linux Server Security, 2nd edition (formerly called Building Secure Servers With Linux), an occasional presenter at information security conferences and composer of the "Network Engineering Polka".


Resources

The Cygwin/X (information about Cygwin's free X server for Microsoft Windows): x.cygwin.com

Tichondrius' HOWTO for setting up VNC with resumable sessions (Ubuntu-centric, but mostly applicable to other distributions): ubuntuforums.org/showthread.php?t=122402

Wikipedia's VNC article, which may be helpful in making sense of the different flavors of VNC: en.wikipedia.org/wiki/Vnc

Figure 4. Using vinagre to Connect to an SSH-Forwarded Local Port


Does anyone remember Linuxcare? Founded in 1998, Linuxcare was a company that provided support services for Linux users in corporate environments. I remember seeing Linuxcare at the first-ever LinuxWorld conference in San Jose, and the thing I took away from that LinuxWorld was the Linuxcare Bootable Business Card (BBC). The BBC was a 50MB cut-down Linux distribution that fit on a business-card-size compact disc. I used that distribution to recover and repair quite a few machines, until the advent of Knoppix. I always loved the portability of that little CD though, and I missed it greatly until I stumbled across Damn Small Linux (DSL) one day.

After reading through the DSL Web site, I discovered that it was possible to run DSL off of a bootable USB key, and that old love for the Bootable Business Card was rekindled in a new way. It wasn't until I had a conversation with fellow sysadmin Kyle Rankin about the PXE boot environment he'd implemented that I realized it might be possible to set up a USB key to do more than merely boot a recovery environment. Before long, I had added the CentOS and Ubuntu netinstalls to my little USB key. Not long after that, I was mentioning this

in my favorite IRC channel, and one of the fellows in there suggested I put the code on SourceForge and call it Billix. I'd had a couple beers by then and thought it sounded like a great idea. In that instant, Billix was born.

Billix is an aggregation of many different tools that can be useful to system administrators, all compressed down to fit within a 256MB bootable USB thumbdrive. The 256MB size is not an arbitrary number; rather, it was chosen because USB thumbdrives are very inexpensive at that size (many companies now give them away as advertising gimmicks). This allows me to have many Billix keys lying around, just waiting to be used. Because the keys are cheap or free, I don't feel bad about

leaving one in a server for a day or two. If your USB drive is larger than 256MB, you still can use it for its designed purpose, storing files. Billix doesn't hamper normal use of the USB drive in any way. There also is an ISO distribution of Billix if you want to burn a CD of it, but I feel it's not nearly as convenient as having it on a USB key.

The current Billix distribution (0.21 at the time of this writing) includes the following tools:

Damn Small Linux 4.2.5

Ubuntu 8.04 LTS netinstall

Ubuntu 7.10 netinstall

Ubuntu 6.06 LTS netinstall

Fedora 8 netinstall

CentOS 5.1 netinstall

CentOS 4.6 netinstall

Debian Etch netinstall

Debian Sarge netinstall

Memtest86 memory-checking utility

Ntpwd Windows password changing utility

DBAN disk wiper utility

So, with one USB key, a system administrator can recover or repair a machine, install one of eight different Linux distributions, test the memory in a system, get into a Windows machine with a lost password or wipe the disks of a machine before repurpose or disposal. In order to install any of the netinstall-based Linux distributions, a working Internet connection with DHCP is required, as the netinstall downloads the installation bits for each distribution on the fly from Internet-based mirrors.

Hopefully, you're excited to check out Billix. You simply can download the ISO version and burn it to a CD to get started, but the full utility of Billix really shines when you install it on a USB disk. Before you install it on a USB disk, you need to meet the following prerequisites:

Billix: a Sysadmin's Swiss Army Knife
Turn that spare USB stick into a sysadmin's dream with Billix. BILL CHILDERS



256MB or greater USB drive with FAT- or FAT32-based filesystem.

Internet connection with DHCP (for netinstalls only; not required for DSL, Windows password removal or disk wiping with DBAN).

install-mbr (part of the mbr package on Ubuntu or Debian, needed for some USB drives).

syslinux (from the syslinux package on Ubuntu or Debian, required to create the bootsector on the USB drive).

Your system must be capable of booting from USB devices (most have this ability if they're made after 2005).

To install the USB-based version of Billix, first check your drive. If that drive has the U3 Windows software on it, you may want to remove it to unlock all of the drive's capacity (see the Resources section for U3 removal utilities, which are typically Windows-based). Next, if your USB drive has data on it, back up the data. I cannot stress this enough. You will be making adjustments to the partition table of the USB drive, so backing up any data that already is on the key is critical. Download the latest version of Billix from the Sourceforge.net project page to your computer. Once the download is complete, untar the contents of the tarball to the root directory of your USB drive.

Now that the contents of the tarball are on your USB drive, you need to install a Master Boot Record (MBR) on the drive and set a bootsector on the drive. The Master Boot Record needs to be set up on the USB drive first. Issue a install-mbr -p1 <device> (where <device> is your USB drive, such as /dev/sdb). Warning: make sure that you get the device of the USB drive correct, or you run the risk of messing up the MBR on your system's boot device. The -p1 option tells install-mbr to set the first partition as active (that's the one that will contain the bootsector).

Next, the bootsector needs to be installed within the first partition. Run syslinux -s <device/partition> (where <device/partition> is the device and partition of the USB drive, such as /dev/sdb1). Warning: much like installing the MBR, installing the bootsector can be a dangerous operation if you run it on the wrong device, so take care and double-check your command line before pressing the Enter key.
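Put together, the whole preparation looks like the following sketch. The device name (/dev/sdb), mountpoint and tarball filename are assumptions for illustration, and DRY_RUN=echo makes each step print instead of execute; clear it only after you have triple-checked the device name:

```shell
DRY_RUN=echo             # set to empty only once you're sure of the device
dev=/dev/sdb             # the USB drive itself (check with fdisk -l first!)
part=${dev}1             # its first (FAT/FAT32) partition

$DRY_RUN mount "$part" /mnt/usb                  # mount the key
$DRY_RUN tar -xvf billix-0.21.tar -C /mnt/usb    # unpack Billix onto it
$DRY_RUN umount /mnt/usb
$DRY_RUN install-mbr -p1 "$dev"                  # write MBR, partition 1 active
$DRY_RUN syslinux -s "$part"                     # write the bootsector
```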

Figure 1. The Billix Boot Menu

At this point, your USB drive can be unmounted safely, and you can test it out by booting from the USB drive. Once your system successfully boots from the USB drive, you should see a menu similar to the one shown in Figure 1. Simply choose the number for what you want to boot, run or install, and that distribution will spring into action. If you don't select a number, Damn Small Linux will boot automatically after 30 seconds.

Damn Small Linux is a miniature version of Knoppix (it actually has much of the automatic hardware-detection routines of Knoppix in it). As such, it makes an excellent rescue environment, or it can be used as a quick "trusted desktop" in the event you need to "borrow" a friend's computer to do something. I have used DSL in the past to commandeer a system temporarily at a cybercafé, so I could log in to work and fix a sick server. I've even used DSL to boot and mount a corrupted Windows filesystem, and I was able to save some of the data. DSL is fairly full-featured for its size, and it comes with two window managers (JWM or Fluxbox). It can be configured to save its data back to the USB disk in a persistent fashion, so you always can be sure you have your critical files with you and that it's easily accessible.

All the Linux distribution installations have one thing in common: they are all network-based installs. Although this is a good thing for Billix, as they take up very little space (around 10MB for each distro), it can be a bad thing during installation, as the installation time will vary with the speed of your Internet connection. There is one other upside to a network-based installation. In many cases, there is no need to update


Troubleshooting Billix

A few things can go wrong when converting a USB key to run Billix (or any USB-based distribution). The most common issue is for the USB drive to fail to boot the system. This can be due to several things. Older systems often split USB disk support into USB-Floppy emulation and USB-HDD emulation. For Billix to work on these systems, USB-HDD needs to be enabled. If your drive came with the U3 Windows-based software vault, this typically needs to be disabled or removed prior to installing Billix.

If you're seeing "MBR123" or something similar in the upper-left corner, but the system is hanging, you have a misconfigured MBR. Try install-mbr again, and make sure to use the -p1 switch. You will need to run syslinux again after running install-mbr. If all else fails, you probably need to wipe the USB drive and begin again. Back up the data on the USB drive, then use fdisk to build a new partition table (make sure to set it as FAT or FAT32). Use mkfs.vfat (with the -F 32 switch if it's a FAT32 filesystem) to build a new blank filesystem, untar the tarball again, and run install-mbr and syslinux on the newly defined filesystem.
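Collected into one place, that wipe-and-rebuild sequence looks roughly like the following sketch. The device name (/dev/sdX) and tarball path are placeholders, and the function refuses to run without both arguments; triple-check the device name before running anything, as these commands destroy its contents.

```shell
#!/bin/sh
# Sketch of the recovery sequence described above. /dev/sdX and the
# tarball path are placeholders -- substitute your own values.
rebuild_billix() {
    dev="$1"; tarball="$2"
    if [ -z "$dev" ] || [ -z "$tarball" ]; then
        echo "usage: rebuild_billix /dev/sdX billix.tar.gz" >&2
        return 1
    fi
    # New partition table with a single FAT32 (type b) partition.
    printf 'o\nn\np\n1\n\n\nt\nb\nw\n' | fdisk "$dev"
    # Fresh FAT32 filesystem on that partition.
    mkfs.vfat -F 32 "${dev}1"
    # Unpack the Billix tarball onto the new filesystem.
    mount "${dev}1" /mnt
    tar -C /mnt -xzf "$tarball"
    umount /mnt
    # Reinstall the MBR (with -p1) and the syslinux boot loader.
    install-mbr -p1 "$dev"
    syslinux "${dev}1"
}
```

The fdisk keystrokes shown are the interactive answers for "new table, one primary partition, type b, write"; if your fdisk prompts differ, partition the drive by hand instead.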

the newly installed operating system after installation, because the OS bits that are downloaded are typically up to date. Note that when using the Red Hat-based installers (CentOS 4.6, CentOS 5.1 and Fedora 8), the system may appear to hang during the download of a file called minstg2.img. The system probably isn't hanging; it's just downloading that file, which is

fairly large (around 40MB), so it can take a while depending on the speed of the mirror and the speed of your connection. Take care not to specify the USB disk accidentally as the install target for the distribution you are attempting to install.

The memtest86 utility has been around for quite a few

years, yet it's a key tool for a sysadmin when faced with a flaky computer. It does only one thing, but it does it very well: it tests the RAM of a system very thoroughly. Simply boot off the USB drive, select memtest from the menu, press Enter, and memtest86 will load and begin testing the RAM of the system immediately. At this point, you can remove the USB drive from the computer. It's no longer needed, as memtest86 is very small and loads completely into memory on startup.

The ntpwd Windows password "cracking" tool can be a controversial tool, but it is included in the Billix distribution because, as a system administrator, I've been asked countless times to get into Windows systems (or accounts on Windows systems) where the password has been lost or forgotten. The ntpwd utility can be a bit daunting, as the UI is text-based and nearly nonexistent, but it does a good job of mounting FAT32- or NTFS-based partitions, editing the SAM account database and saving those changes. Be sure to read all the messages that ntpwd displays, and take care to select the proper disk partition to edit. Also, take the program's advice and nullify a password rather than trying to change it from within the interface; zeroing the password works much more reliably.

DBAN (otherwise known as Darik's Boot and Nuke) is a very good "nuke it from orbit" hard disk wiper. It provides various levels of wipe, from a basic "overwrite the disk with zeros" to a full DoD-certified, multipass wipe. Like memtest86, DBAN is small and loads completely into memory, so you can boot the utility, remove the USB drive, start a wipe and move on to another system. I've used this to wipe clean disks on systems before handing them over to a recycler or before selling a system.

In closing, Billix may not make you coffee in the morning or eradicate Windows from the face of the earth, but having a USB key in your pocket that offers you the functionality to do all of those tasks quickly and easily can make the life of a system administrator (or any Linux-oriented person) much easier.

Bill Childers is an IT Manager in Silicon Valley, where he lives with his wife and two children. He enjoys Linux far too much, and probably should get more sun from time to time. In his spare time, he does work with the Gilroy Garlic Festival, but he does not smell like garlic.


SYSTEM ADMINISTRATION

Billix is an aggregation of many different tools that can be useful to system administrators, all compressed down to fit within a 256MB bootable USB thumbdrive.

Expanding Billix

It's relatively easy to expand Billix to support other Linux distributions, such as Knoppix or the Ubuntu live CDs. Copy the contents of the Billix USB tarball to a directory on your hard disk, and download the distro you want. Copy the necessary kernel and initrd to the directory where you put the contents of the USB tarball, taking care to rename any files if there are files in that directory with the same name. Copy any compressed filesystems that your new distro may use to the USB drive (for example, Knoppix has the KNOPPIX directory, and Puppy Linux uses PUP_XXX.SFS). Then, look at the boot configuration for that distro (it should be in isolinux.cfg). Take the necessary lines out of that file, and put them in the Billix syslinux.cfg file, changing filenames as necessary. Optionally, you can add a menu item to the boot.msg file. Finally, run syslinux -s <device>, and reboot your system to test out your newly expanded Billix.
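As a sketch, the lines you carry over into syslinux.cfg for a new distro might look like this (the label, kernel and initrd filenames here are illustrative, not actual Knoppix filenames; use whatever names appear in the distro's own isolinux.cfg and whatever you renamed them to on the USB key):

```
label knoppix
  kernel knx-vmlinuz
  append initrd=knx-minirt.gz ramdisk_size=100000 lang=us
```

Each label becomes one more option you can type at the Billix boot prompt.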

I have a 2GB USB drive that has a "Super-Billix" installation that includes Knoppix and Ubuntu 8.04. An added bonus of having the entire Ubuntu live CD in your pocket is that, thanks to the speed of USB 2.0, you can install Ubuntu in less than ten minutes, which would be really useful at an installfest. There is good information on creating Ubuntu-bootable USB drives available at the Pendrive Linux Web site.

Alternatively, a really neat thing to do (but way beyond the scope of this article) is to convert Billix into a network-boot (via Pre-Execution Environment, or PXE) environment. I've actually got a VMware virtual machine running Billix as a PXE boot server.

Resources

Billix Project Page: sourceforge.net/projects/billix

Damn Small Linux: www.damnsmalllinux.org

DBAN Project Page: dban.sourceforge.net

Knoppix: www.knoppix.net

Pendrive Linux: www.pendrivelinux.com

Syslinux: syslinux.zytor.com/index.php

Pxelinux: syslinux.zytor.com/pxe.php

U3 Removal Software: www.u3.com/uninstall


If a tree falls in the woods and no one is there to hear it, does it make a sound? This is the classic query designed to place your mind into the Zen-like state known as the silent mind. Whether or not you want to hear a tree fall, if you run a network, you probably want to hear a server when it goes down. Many organizations utilize the long-established Simple Network Management Protocol (SNMP) as a way to monitor their networks proactively and listen for things going down.

At a rudimentary level, SNMP requires only two items to work: a management server and a managed device (or devices). The management server pulls status and health information at regular intervals from the managed devices and stores the information in a table. Managed devices use local SNMP agents to notify the management server when defined behavior occurs (such as errors or "traps"), which are stored in the same table on the server. The result is an accurate, real-time reporting mechanism for outages. However, SNMP as a protocol does not stipulate how the data in these tables is to be presented and managed for the end user. That's where a promising new open-source network-monitoring software called Zenoss (pronounced Zeen-ohss) comes in.

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment. The Core version of Zenoss used in this article is freely available under the GPLv2. An Enterprise version also is available with additional features and support. In this article, we install Zenoss on a CentOS 5.1 system to observe its usefulness in a network-monitoring role. From there, we create a simulated multisystem server network using the following systems: a Fedora-based Postfix e-mail server, an Ubuntu server running Apache and a Windows server running File and Print services. To conserve space, only the CentOS installation is discussed in detail here. For the managed systems, only SNMP installation and configuration are covered.

Building the Zenoss Server

Begin by selecting your hardware. Zenoss lacks specific hardware requirements, but it relies heavily on MySQL, so you can use MySQL requirements as a rough guideline. I recommend using the fastest processor available, 1GB of memory, fast enough hard disks to provide acceptable MySQL performance and Gigabit Ethernet for the network. I ran several test configurations, and this configuration seemed adequate enough for a medium-size network (100+ nodes/devices). To keep configuration simple, all firewalls and SELinux instances were disabled in the test environment. If you use firewalls in your environment, open ports 161 (SNMP), 8080 (Zenoss Management

Page) and 514 (if you integrate syslog with Zenoss).

Install CentOS 5.1 on the server using your own preferences. I used a bare install with no X Window System or desktop manager. Assign a static IP address and any other pertinent network information (DNS servers and so forth). After the OS install is complete, install the following packages using the yum command below:

yum install mysql mysql-server net-snmp net-snmp-utils gmp httpd
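If you would rather keep the firewall up than disable it, rules along these lines would open the three ports mentioned earlier. This is an illustrative fragment in the style of CentOS 5's /etc/sysconfig/iptables (the RH-Firewall-1-INPUT chain is the distribution's default; adapt it to however your firewall is actually managed):

```
-A RH-Firewall-1-INPUT -p udp --dport 161 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp --dport 8080 -j ACCEPT
-A RH-Firewall-1-INPUT -p udp --dport 514 -j ACCEPT
```

Restart the iptables service after editing the file so the new rules take effect.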

If the mysqld or the httpd service has not started after yum installs it, start it and set it to run for your configured runlevel. Next, download the latest Zenoss Core rpm from Sourceforge.net (2.1.3 at the time of this writing), and install it using rpm from the command line. To start all the Zenoss-related dæmons after the rpm has been installed, type the following at a command prompt:

service zenoss start

Launch a Web browser from any machine, and type the IP address of the Zenoss server using port 8080 (for example, http://192.168.142.6:8080). Log in to the site using the default account admin with a password of zenoss. This brings up the main dashboard. The dashboard is a compartmentalized view of the state of your managed devices. If you don't like the default display, you can arrange your dashboard any way you want using the various drop-down lists on the portlets (windows). I recommend setting the Production States portlet to display Production, so we can see our test systems after they are added.

Almost everything related to managed devices in Zenoss revolves around classes. With classes, you can create an infinite number of systems, processes or service classifications to monitor. To begin adding devices, we need to set our SNMP community strings at the top-level Devices class. SNMP community strings are like passphrases used to authenticate traffic between devices. If one device wants to communicate with another, they must have matching community names/strings. In many deployments, administrators use the default community name of public (and/or private), which creates a security risk. I recommend changing these strings and making them into a short phrase. You can add numbers and characters to make the community name more complex to guess/crack, but I find phrases easier to remember.
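For instance, one quick way to turn a memorable phrase into a single-word community string is simply to strip the spaces. The helper below is a trivial illustration (the function name is made up, not part of Zenoss or Net-SNMP):

```shell
# Turn a memorable phrase into a single-word community string by
# stripping the spaces (illustrative helper only).
make_community() {
    printf '%s' "$1" | tr -d ' '
}

make_community "whats our clearance clarence"   # -> whatsourclearanceclarence
```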

Click on the Devices link on the navigation menu on the left, so that Devices is listed near the top of the page. Click on the zProperties tab, and scroll down. Enter an SNMP

Zenoss and the Art of Network Monitoring

If a server goes down, do you want to hear it? JERAMIAH BOWLING



community string in the zSNMPCommunity field. For our test environment, I used the string whatsourclearanceclarence. You can use different strings with different subclasses of systems or individual systems, but by setting it at the Devices class, it will be used for any subclasses unless it is overridden. You also could list multiple strings in the zSNMPCommunities under the Devices class, which allows you to define multiple strings for the discovery process discussed later. Make sure your community string (zSNMPCommunity) is in this list.

Installing Net-SNMP on Linux Clients

Now, let's set up our Linux systems so they can talk to the Zenoss server. After installing and configuring the operating systems on our other Linux servers, install the Net-SNMP package on each using the following command on the Ubuntu server:

sudo apt-get install snmpd

And on the Fedora server, use:

yum install net-snmp

Once the Net-SNMP packages are installed, edit out any other lines in the Access Control sections at the beginning of the /etc/snmp/snmpd.conf file, and add the following lines:

#       sec.name  source           community
com2sec local     localhost        whatsourclearanceclarence
com2sec mynetwork 192.168.142.0/24 whatsourclearanceclarence

#     group.name sec.model sec.name
group MyROGroup  v1        local
group MyROGroup  v1        mynetwork
group MyROGroup  v2c       local
group MyROGroup  v2c       mynetwork

#          incl/excl subtree mask
view all   included  .1      80

#      context sec.model sec.level prefix read write notif
access MyROGroup ""  any  noauth   exact  all  none  none

Do not edit out any lines beneath the last Access Control sections. Please note that the above is only a mildly restrictive configuration. Consult the snmpd.conf file or the Net-SNMP documentation if you want to tighten access. On the Ubuntu server, you also may have to change the following line in the /etc/default/snmpd file to allow SNMP to bind to anything other than the local loopback address:

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'

Installing SNMP on Windows

On the Windows server, access the Add/Remove Programs utility from the Control Panel. Click on the Add/Remove Windows Components button on the left. Scroll down the list of Components, check off Management and Monitoring Tools, and click on the Details button. Check Simple Network Management Protocol in the list, and click OK to install. Close the Add/Remove window, and go into the Services console from Administrative Tools in the Control Panel. Find the SNMP service in the list, right-click on it, and click on Properties to bring up the service properties tabs. Click on the Traps tab, and type in the community name. In the list of Trap Destinations, add the IP address of the Zenoss server. Now, click on the Security tab, check off the Send authentication trap box, enter the community name, and give it READ-ONLY rights. Click OK, and restart the service.

Return to the Zenoss management Web page. Click the Devices link to go into the subclass of /Devices/Servers/Windows, and on the zProperties tab, enter the name of a domain admin account and password in the zWinUser and zWinPassword fields. This account gives Zenoss access to the Windows Management Instrumentation (WMI) on your Windows systems. Make sure to click Save at the bottom of the page before navigating away.

Adding Devices into Zenoss

Now that our systems have SNMP, we can add them into Zenoss. Devices can be added individually or by scanning the network. Let's do both. To add our Ubuntu server into Zenoss, click on the Add Device link under the Management navigation section. Enter the IP address of the server and the community name. Under Device Class Path, set the selection to /Server/Linux. You could add a variety of other hardware, software and Zenoss information on this page before adding a system, but at a minimum, an IP address, name and community name is required (Figure 1). Click the Add Device button, and the discovery process runs. When the results are displayed, click on the link to the new device to access it.

Figure 1. Adding a Device into Zenoss



To scan the network for devices, click the Networks link under the Browse By section of the navigation menu. If your network is not in the list, add it using CIDR notation. Once added, check the box next to your network, and use the drop-down arrow to click on the Select Discover Devices option. You will see a similar results page as the one from before. When complete, click on the links at the bottom of the results page to access the new devices. Any device found will be placed in the /Discovered class. Because we should have discovered the Fedora server and the Windows server, they should be moved to the /Devices/Servers/Linux and /Devices/Servers/Windows classes, respectively. This can be done from each server's Status tab by using the main drop-down list and selecting Manager→Change Class.

If all has gone well so far, we have a functional SNMP monitoring system that is able to monitor heartbeat/availability (Figure 2) and performance information (Figure 3) on our systems. You can customize other various Status and Performance Monitors to meet your needs, but here, we will use the default localhost monitors.

Figure 2. The Zenoss Dashboard

Figure 3. Performance data is collected almost immediately after discovery.

Creating Users and Setting E-Mail Alerts

At this point, we can use the dashboard to monitor the managed devices, but we will be notified only if we visit the site. It would be much more helpful if we could receive alerts via e-mail. To set up e-mail alerting, we need to create a separate user account, as alerts do not work under the admin account. Click on the Setting link under the Management navigation section. Using the drop-down arrow on the menu, select Add User. Enter a user name and e-mail address when prompted. Click on the new user in the list to edit its properties. Enter a password for the new account, and assign a role of Manager. Click Save at the bottom of the page. Log out of Zenoss, and log back in with the new account. Bring the settings page back up, and enter your SMTP server information. After setting up SMTP, we need to create an Alerting Rule for our new user. Click on the Users tab, and click on the account just created in the list. From the resulting page, click on the Edit tab, and enter the e-mail address to which you want alerts sent. Now, go to the Alerting Rules tab, and create a new rule using the drop-down arrow. On the edit tab of the new Alerting Rule, change the Action to email, Enabled to True, and change the Severity formula to >= Warning (Figure 4). Click Save.

Figure 4. Creating an Alert Rule

Figure 5. Zenoss alerts are sent fresh to your mailbox.



The above rule sends alerts when any Production server experiences an event rated Warning or higher (Figure 5). Using a filter, you can create any number of rules and have them apply only to specific devices or groups of devices. If you want to limit your alerts by time, to working hours for example, use the Schedule tab on the Alerting Rule to define a window. If no schedule is specified (the default), the rule runs all the time. In our rule, only one user will be notified. You also can create groups of users from the Settings page, so that multiple people are alerted, or you could use a group e-mail address in your user properties.

Services and Processes

We can expand our view of the test systems by adding a process and a service for Zenoss to monitor. When we refer to a process in Zenoss, we mean an active program, usually a dæmon, running on a managed device. Zenoss uses regular expressions to monitor processes.

To monitor Postfix on the mail server, first, let's define it as a process. Navigate to the Processes page under the Classes section of the navigation menu. Use the drop-down arrow next to OS Processes, and click Add Process. Enter Postfix as the process ID. When you return to the previous page, click on the link to the new process. On the edit tab of the process, enter master in the Regex field. Click Save before navigating away. Go to the zProperties tab of the

process, and make sure the zMonitor field is set to True. Click Save again. Navigate back to the mail server from the dashboard, and on the OS tab, use the topmost menu's drop-down arrow to select Add→Add OSProcess. After the process has been added, we will be alerted if the Postfix process degrades or fails. While still on the OS tab of the

Figure 6. Zenoss comes with a plethora of predefined Windows services to monitor.


server, place a check mark next to the new Postfix process, and from the OS Processes drop-down menu, select Lock OSProcess. On the next set of options, select Lock from deletion. This protects the process from being overwritten if Zenoss remodels the server.

Services in Zenoss are defined by active network ports instead of running dæmons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1-65535 and include common network apps/protocols, such as SMTP (Port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the main menu's drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class Field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server, and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).

Only the Beginning

There are a multitude of other features in Zenoss that space here prevents covering, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9) and Zenpacks that provide additional monitoring and performance-capturing capabilities for common applications.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind as long as you work with networks, with Zenoss, at least you will be able to sleep at night knowing you will hear things when they go down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

Resources

Zenoss: www.zenoss.com

Zenoss SourceForge Downloads Page: sourceforge.net/project/showfiles.php?group_id=163126

NET-SNMP: net-snmp.sourceforge.net

CentOS: www.centos.org

CentOS 5 Mirrors: isoredirect.centos.org/centos/5/isos/i386

Figure 7. Monitoring HTTP as an IPService

Figure 8. Zenoss automatically maps your network for you.

Figure 9. Multiple sites can be monitored geographically with the Google Maps API.


In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market: Dell and HP/Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects such as the Etherboot Project and the Linux Terminal Server Project have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys world-wide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?

A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot EXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the

boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP; stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4); for example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.
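Stages 1 and 2 are driven by the boot loader configuration that the client fetches over TFTP. As a sketch, a pxelinux configuration for an LTSP client might look like the following (the file path, kernel and initrd names and the NFS root shown are assumptions for illustration; match them to the files in your own LTSP tree):

```
# pxelinux.cfg/default in the TFTP root (path and filenames assumed)
default ltsp
label ltsp
  kernel vmlinuz-ltsp
  append ro initrd=initrd.img root=/dev/nfs nfsroot=192.168.0.1:/opt/ltsp/i386 ip=dhcp
```

The append line is what hands the client its NFS root for stage 3 of the boot sequence.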

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a

Thin Clients Booting over a Wireless Bridge

How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON


Figure 1. LTSP Traffic Profile and Boot Sequence

thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.

How to Set Up a Thin-Client Network with a Wireless Bridge

One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1, serving PXE clients.
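For reference, the heart of that file looks something like the following fragment. The addresses and paths shown are illustrative defaults, so verify them against the file your ltsp-server package actually installed:

```
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.20 192.168.0.250;
    option routers 192.168.0.1;
    next-server 192.168.0.1;            # TFTP boot server
    filename "/ltsp/i386/pxelinux.0";   # boot loader sent to PXE clients
    option root-path "/opt/ltsp/i386";  # NFS root for the clients
}
```

Changing the subnet here is all that is needed to move the clients to a different IP range.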

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap: 192.168.0.0/24

rpc.mountd: 192.168.0.0/24

rpc.statd: 192.168.0.0/24

in.tftpd: 192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart

sudo invoke-rc.d nfs-common restart

sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rcd dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide in getting the bridge running so that the clients can boot over the bridge:

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet

www l inux journa l com | 2 3

adapter of the other wireless device The bridge is transparentto both the clients and the server neither has any knowledgethat the bridge is in place

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all daemon programs manually.
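To give a feel for that manual alternative, the restart step alone looks something like the loop below. The service names are assumptions and vary by distribution, and the echo makes this a dry run; on a real system, you would swap it for the actual restart command.

```shell
#!/bin/sh
# Sketch of the manual alternative to Webmin: after editing the
# boot-related configs by hand, each daemon must reread them.
# Service names are assumptions; echo is a dry run -- replace it
# with 'service "$svc" restart' on a real system.
for svc in dhcpd tftpd nfs-kernel-server; do
    echo "would restart: $svc"
done
```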

Evaluating the Performance of a Thin-Client Network

The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to a wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us repeatability, a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.
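The dB-to-distance mapping can be approximated with the free-space path-loss formula. The snippet below is our own back-of-the-envelope version for 2.4GHz (the 40.05 constant comes from 20*log10(f) - 147.55 with f = 2.4GHz), not the chamber's calibrated model.

```shell
#!/bin/sh
# Back-of-the-envelope check of the attenuation-to-distance idea:
# free-space path loss at 2.4GHz is roughly
#   FSPL(dB) = 20*log10(d_meters) + 40.05
# so a measured attenuation L maps to d = 10^((L - 40.05) / 20).
# An illustrative assumption, not the Azimuth's internal model.
atten_db=80
awk -v L="$atten_db" 'BEGIN { printf "approx. distance: %.1f m\n", 10^((L - 40.05) / 20) }'
```

For an 80dB path loss, this works out to roughly 100 meters of equivalent free-space separation.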

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber;


SYSTEM ADMINISTRATION

Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Conclusion

As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.
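One crude way to stagger boots, assuming the clients support Wake-on-LAN: wake them one at a time with a delay in between. This is a sketch; the MAC list and delay are placeholders, and the wake command is shown as a dry run.

```shell
#!/bin/sh
# Sketch: stagger thin-client boots over the bridge by waking clients
# one at a time. MACs and delay are placeholders; the echo is a dry
# run -- replace it with 'etherwake "$mac"' (or wakeonlan) to use it.
MACS="00:11:22:33:44:01 00:11:22:33:44:02 00:11:22:33:44:03"
for mac in $MACS; do
    echo "waking $mac"
    sleep 1   # use something like 45s, enough for the network-heavy boot phase
done
```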

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our tests show, as illustrated in Figure 5, that heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.

Acknowledgement

This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03IN31396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu Thin Client How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in remote power and remote terminals, or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup

You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP

Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), and locate the subnet section (or each host section if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;
filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus

Set up a PXE server and then add menus to boot kickstart images, rescue disks and diagnostic tools all from the network. KYLE RANKIN



Install TFTP

After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions, if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
# the trivial file transfer protocol.
# The tftp protocol is often used to boot diskless
# workstations, download configuration files to network-aware
# printers, and to start the installation process for
# some operating systems.
service tftp
{
    disable         = no
    socket_type     = dgram
    protocol        = udp
    wait            = yes
    user            = root
    server          = /usr/sbin/in.tftpd
    server_args     = -s /var/lib/tftpboot
    per_source      = 11
    cps             = 100 2
    flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.
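On an iptables-based host firewall, that might look like the fragment below (iptables-restore syntax). This is a sketch to merge into your existing ruleset, not a complete firewall file.

```
# Fragment (iptables-restore syntax): accept inbound TFTP on UDP/69.
# A sketch to merge into an existing ruleset, not a drop-in file.
-A INPUT -p udp --dport 69 -j ACCEPT
```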

Add Syslinux

Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.

PXE Menu Magic

You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
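The search order above can be sketched for a concrete host. The MAC and IP below are illustrative values, and the hex conversion mirrors what pxelinux does: uppercase hex, with one trailing digit dropped per miss.

```shell
#!/bin/sh
# Sketch: the sequence of pxelinux.cfg/ file names tried for a host
# with MAC 88:99:aa:bb:cc:dd and IP 10.0.0.200 (illustrative values).
mac="88:99:aa:bb:cc:dd"
echo "01-$(echo "$mac" | tr ':' '-')"            # MAC-based name, tried first
hex=$(printf '%02X%02X%02X%02X' $(echo "10.0.0.200" | tr '.' ' '))
while [ -n "$hex" ]; do                          # hex IP, shrinking on each miss
    echo "$hex"
    hex=${hex%?}
done
echo "default"                                   # final fallback
```

For this host, the script prints 01-88-99-aa-bb-cc-dd, then 0A0000C8 down to 0, then default.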

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux
label linux
    kernel vmlinuz-centos-3.6
    append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?

The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools like Knoppix and memtest86+ available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus

Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more-specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix
label 2
    kernel vmlinuz-centos-4.5-64
    append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option lists a message, if there are any to display, by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------
<F1> Main | <F2> Help
Default image will boot in 30 seconds...

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.
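For instance, syslinux ships a menu.c32 module that draws a navigable menu. A hedged sketch of the same two options in that style follows; menu.c32 must also be copied into the TFTP root, and the append lines from the earlier example still apply.

```
# Sketch: the same two boot options using syslinux's menu.c32 module
# (copy menu.c32 into /var/lib/tftpboot alongside pxelinux.0).
default menu.c32
prompt 0
timeout 300

menu title Boot Options
label 1
    menu label Knoppix 5.1.1
    kernel vmlinuz-knx5.1.1
label 2
    menu label CentOS 4.5 64-bit
    kernel vmlinuz-centos-4.5-64
```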

Extra Features: PXE Rescue Disk

One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk; I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the ones I used above in my example:

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx5.1.1 and miniroot-knx5.1.1.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove 2 from the append line.

The final step might be the largest for you if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.

Extra Features: Memtest86+

Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
    kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion

There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.


New LinuxJournal.com Mobile

We are all very excited to let you know that LinuxJournal.com is optimized for mobile viewing. You can enjoy all of our news, blogs and articles from anywhere you can find a data connection on your phone or mobile device. We know you find it difficult to be separated from your Linux Journal, so now you can take LinuxJournal.com everywhere. Need to read that latest shell script trick right now? You got it. Go to m.linuxjournal.com to enjoy this new experience, and be sure to let us know how it works for you.

KATHERINE DRUCKMAN

Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan

IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825-1829). Later, in 1998, these RFCs were deprecated by RFCs 2401-2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.
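The keyed-checksum idea behind the ICV can be demonstrated in miniature with an HMAC over some packet bytes; only a holder of the shared secret can reproduce or verify it. This is an illustration of the concept only (the payload and key are made-up values, and IPsec's actual ICV construction is specified by its own RFCs, not this command):

```shell
#!/bin/sh
# Concept demo: a keyed checksum (HMAC-SHA1) over packet bytes, in the
# spirit of IPsec's ICV. The payload and key here are made-up values.
secret="shared-key"
printf 'packet-payload' | openssl dgst -sha1 -hmac "$secret"
```

Change either the payload or the key, and the checksum changes; without the key, a third party cannot forge a matching value.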

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

Creating VPNs with IPsec and SSL/TLS

How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN

Figure 1. A Basic VPN Tunnel

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).
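In Openswan, NAT Traversal is switched on in the config setup section of ipsec.conf. A fragment might look like the sketch below; the virtual_private networks are placeholders for your own private address ranges.

```
# ipsec.conf fragment (sketch): enable NAT-T so ESP is carried in UDP
# (RFC 3948) when a NAT sits between the peers. The network ranges
# below are placeholders.
config setup
    nat_traversal=yes
    virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16
```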

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. S/WAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a setmark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN

OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.
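A minimal SSL/TLS server configuration for OpenVPN might look like the fragment below. The certificate and key file names are placeholders for the ones you generate with your own CA; this is a sketch, not the article's configuration.

```
# Hypothetical minimal OpenVPN server.conf (SSL/TLS mode). File names
# are placeholders for the CA, certificate and key you generate.
port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh1024.pem
server 10.8.0.0 255.255.255.0
keepalive 10 120
```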

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan

First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

An example of a conn section in etcipsecconf whichdefines a tunnel between two nodes on the same LANwith the left one as 192168089 and the right one as192168092 is as follows

conn linux-to-linux

Simply use raw RSA keys

After starting openswan run

ipsec showhostkey --left (or --right)

and fill in the connection similarly

to the example below

left=192168089

leftrsasigkey=0sAQPP

The remote user

right=192168092

rightrsasigkey=0sAQON

type=tunnel

auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsakey

Then, copy and paste the contents of rsakey into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. The output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly

Version check and ipsec on-path [OK]

Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)

Checking for IPsec support in kernel [OK]

NETKEY detected testing for disabled ICMP send_redirects [OK]

NETKEY detected testing for disabled ICMP accept_redirects [OK]

Checking for RSA private key (/etc/ipsec.d/hostkey.secrets) [OK]

Checking that pluto is running [OK]

Checking for ip command [OK]

Checking for iptables command [OK]

Opportunistic Encryption Support [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)

IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)

IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the SPI (Security Parameter Index) header is the same for all packets in the same direction; it is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.


SYSTEM ADMINISTRATION

Example: Setting Up a VPN Tunnel with OpenVPN
First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side over a secured channel (like SSH, for example). Both client and server use this key when the tunnel is created.

This type of shared key is the simplest; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.
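As a sketch of what "not firewalled" can mean in practice (the interface name and port are the defaults assumed in this article; adapt them to your ruleset), iptables rules like these accept the tunnel interface and the OpenVPN port itself:

```
# Accept traffic arriving on the tunnel interface (tun0 here).
iptables -A INPUT -i tun0 -j ACCEPT
iptables -A FORWARD -i tun0 -j ACCEPT

# Accept the encapsulated OpenVPN traffic itself (default UDP port 1194).
iptables -A INPUT -p udp --dport 1194 -j ACCEPT
```

These lines are illustrative only; integrate them with whatever firewall policy you already run.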

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar 3 2007

IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.

LZO compression initialized

TUN/TAP device tun0 opened

/sbin/ip link set dev tun0 up mtu 1500

/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1

UDPv4 link local (bound): [undef]:1194

UDPv4 link remote: 192.168.0.89:1194

Peer Connection Initiated with 192.168.0.89:1194

Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).

IPsec and OpenVPN: a Short Comparison
IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in the kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement
Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and OpenSolaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era
Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C# and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security
Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.
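As a sketch of the definer rights model (the table, procedure and account names here are invented for illustration, not from the article), the application account is granted EXECUTE on a procedure and nothing else:

```sql
-- Hypothetical: the procedure, created by a privileged user, is the
-- only write path into the payments table.
CREATE PROCEDURE log_payment (IN acct INT, IN amount DECIMAL(10,2))
    SQL SECURITY DEFINER   -- runs with the creator's permissions
BEGIN
    INSERT INTO payments (account_id, amount, created_at)
    VALUES (acct, amount, NOW());
END;

-- The application account can call the procedure but cannot
-- touch the table directly or run ad hoc SQL against it.
GRANT EXECUTE ON PROCEDURE mydb.log_payment TO 'webapp'@'%';
```

(When entering this from the mysql command-line client, you would wrap the procedure body in DELIMITER commands.)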

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

Data Abstraction
It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequence of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications
Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed
It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.
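The article doesn't include the benchmark source, but a trial-division prime counter in the stored program language might look like this sketch (my reconstruction, not the author's actual test code):

```sql
CREATE PROCEDURE count_primes (IN max_num INT, OUT prime_count INT)
BEGIN
    DECLARE n INT DEFAULT 2;
    DECLARE d INT;
    DECLARE is_prime INT;
    SET prime_count = 0;
    WHILE n < max_num DO
        SET is_prime = 1;
        SET d = 2;
        -- Pure computation with no table access, so the loop isolates
        -- the speed of the language itself.
        WHILE d * d <= n AND is_prime = 1 DO
            IF n % d = 0 THEN SET is_prime = 0; END IF;
            SET d = d + 1;
        END WHILE;
        SET prime_count = prime_count + is_prime;
        SET n = n + 1;
    END WHILE;
END;
```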

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation
Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping
It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?
Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements, such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes, and will take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
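As a sketch of that last pattern (the function name and the tax rate here are invented for illustration):

```sql
-- Encapsulate a calculation once, then reuse it from any SQL statement.
CREATE FUNCTION sales_tax (amount DECIMAL(10,2))
    RETURNS DECIMAL(10,2)
    DETERMINISTIC
    RETURN ROUND(amount * 0.08, 2);

-- Called from ordinary SQL:
--   SELECT id, sales_tax(total) FROM orders;
```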

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006, with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.



In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.

The ha.cf File
The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive: the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try increasing the keepalive.

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

Listing 1. The /etc/ha.d/authkeys File

auth 1
1 sha1 ThisIsAVeryLongAndBoringPassword

Sometimes there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is

ucast <device> <peer-ip-address>

Here is an example

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.
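For reference, an mcast line takes a device, a multicast group, a UDP port, a TTL and a loop flag. The values below are placeholders, so check the Heartbeat documentation for what fits your network:

```
mcast eth0 225.0.0.1 694 1 0
```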

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2
deadtime 32
warntime 16
initdead 64
baud 19200
# On briggs, the serial device is /dev/ttyS1.
# On stratton, the serial device is /dev/ttyS0.
serial /dev/ttyS1
auto_failback on
node briggs
node stratton
use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1
deadtime 10
warntime 5
udpport 694
# deimos heartbeat ip address is 192.168.1.11
# phobos heartbeat ip address is 192.168.1.21
ucast eth1 192.168.1.11
auto_failback off
stonith_host deimos wti_nps ares.example.com erisIsTheKey
stonith_host phobos wti_nps ares.example.com erisIsTheKey
node deimos
node phobos
use_logd yes

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File

The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).
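The same device::mountpoint::fstype pattern works for ordinary block devices as well; the line below is a sketch with made-up values, not one of the article's listings:

```
# Hypothetical haresources resource: mount an ext3 filesystem
# from a local disk partition on /srv/www
Filesystem::/dev/sdb1::/srv/www::ext3
```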


SYSTEM ADMINISTRATION


Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second, which is not bad for two old castaways.
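A sync job like the one described could be driven by a cron entry along these lines; the hostname and paths are illustrative assumptions, not taken from the article:

```
# Hypothetical /etc/cron.d entry: pull the static HTML from the
# primary node once an hour
0 * * * * root rsync -a --delete primary.example.com:/var/www/ /var/www/
```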

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting

Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion

I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat, Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use, Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation

To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of disk space. I recommend at least a 1GHz processor with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (the default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.
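On a Fedora system, those two openings might look something like the fragment below in /etc/sysconfig/iptables; this is an illustrative sketch (rule placement and chain names vary by setup), not from the article:

```
# Hypothetical iptables rules: allow the custom LDAP port and the
# fixed Admin Console port chosen during FDS setup
-A INPUT -m state --state NEW -p tcp --dport 3891 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 3892 -j ACCEPT
```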

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives
rm java
ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

When setup is complete, use the syntax returned from the script to access the admin console (./startconsole -u admin http://fullyqualifieddomainname:port) using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script, or directly edit /etc/rc.d/rc.local, to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd
/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying

permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate

the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two), configured the same way (Fedora + FDS), with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already to make life easier.
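If you go the /etc/hosts route, an entry for the peer on each server is enough; the hostnames and addresses below are placeholders, not values from the article:

```
# Hypothetical /etc/hosts entries: each server must resolve its peer
192.168.1.10    fds1.example.com    fds1
192.168.1.20    fds2.example.com    fds2
```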

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Figure 5. User Screen

Figure 6. Setting Up Replica

Now, you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication, to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync; you also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on the second server, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients

With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility, available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tab. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM, instead of your local passwd and shadow files, to authenticate users.
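Behind the scenes, the changes amount to only a few lines. A minimal sketch of the relevant settings (the server, port and base DN below are placeholder assumptions, not values from the article) might look like:

```
# /etc/ldap.conf (illustrative values only)
host fds1.example.com
port 3891
base dc=example,dc=com

# /etc/nsswitch.conf: consult LDAP after the local files
passwd: files ldap
shadow: files ldap
group:  files ldap
```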


Figure 7. Enter a Name and Password

Figure 8. Enter Information for the RManager Account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now, the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some dll issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging on locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings, without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.




Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525



At Your Service

MAGAZINE/PRINT SUBSCRIPTIONS: Renewing your subscription, changing your address, paying your invoice, viewing your account details or other subscription inquiries can instantly be done on-line: www.linuxjournal.com/subs. Alternatively, within the US and Canada, you may call us toll-free at 1-888-66-LINUX (54689), or internationally at +1-818-487-2089. E-mail us at subs@linuxjournal.com, or reach us via postal mail at Linux Journal, PO Box 16476, North Hollywood, CA 91615-9911 USA. Please remember to include your complete name and address when contacting us.

DIGITAL SUBSCRIPTIONS: Digital subscriptions of Linux Journal are now available and delivered as PDFs anywhere in the world for one low cost. Visit www.linuxjournal.com/digital for more information, or use the contact information above for any digital magazine customer service inquiries.

LETTERS TO THE EDITOR: We welcome your letters and encourage you to submit them at www.linuxjournal.com/contact or mail them to Linux Journal, 1752 NW Market Street, #200, Seattle, WA 98107 USA. Letters may be edited for space and clarity.

WRITING FOR US: We always are looking for contributed articles, tutorials and real-world stories for the magazine. An author's guide, a list of topics and due dates can be found on-line: www.linuxjournal.com/author.

ADVERTISING: Linux Journal is a great resource for readers and advertisers alike. Request a media kit, view our current editorial calendar and advertising due dates, or learn more about other advertising and marketing opportunities by visiting us on-line: www.linuxjournal.com/advertising. Contact us directly for further information: ads@linuxjournal.com or +1 713-344-1956 ext. 2.

ON-LINE/WEB SITE: Read exclusive on-line-only content on Linux Journal's Web site: www.linuxjournal.com. Also, select articles from the print magazine are available on-line. Magazine subscribers, digital or print, receive full access to issue archives; please contact Customer Service for further information: subs@linuxjournal.com.

FREE e-NEWSLETTERS: Each week, Linux Journal editors will tell you what's hot in the world of Linux. Receive late-breaking news, technical tips and tricks, and links to in-depth stories featured on www.linuxjournal.com. Subscribe for free today: www.linuxjournal.com/enewsletters.

Executive Editor: Jill Franklin, jill@linuxjournal.com
Associate Editor: Shawn Powers, shawn@linuxjournal.com
Associate Editor: Mitch Frazier, mitch@linuxjournal.com
Senior Editor: Doc Searls, doc@linuxjournal.com
Art Director: Garrick Antikajian, garrick@linuxjournal.com
Products Editor: James Gray, newproducts@linuxjournal.com
Editor Emeritus: Don Marti, dmarti@linuxjournal.com
Technical Editor: Michael Baxter, mab@cruzio.com
Senior Columnist: Reuven Lerner, reuven@lerner.co.il
Chef Français: Marcel Gagné, mggagne@salmar.com
Security Editor: Mick Bauer, mick@visi.com
Proofreader: Geri Gale
Publisher: Carlie Fairchild, publisher@linuxjournal.com
General Manager: Rebecca Cassity, rebecca@linuxjournal.com
Sales Manager: Joseph Krack, joseph@linuxjournal.com
Sales and Marketing Coordinator: Tracy Manford, tracy@linuxjournal.com
Circulation Director: Mark Irgang, mark@linuxjournal.com
Webmistress: Katherine Druckman, webmistress@linuxjournal.com
Accountant: Candy Beauchamp, acct@linuxjournal.com

Contributing Editors
David A. Bandel • Ibrahim Haddad • Robert Love • Zack Brown • Dave Phillips • Marco Fioretti • Ludovic Marcotte • Paul Barry • Paul McKenney • Dave Taylor • Dirk Elmendorf

Linux Journal is published by, and is a registered trade name of, Belltown Media, Inc., PO Box 980985, Houston, TX 77098 USA.

Reader Advisory Panel
Brad Abram Baillio • Nick Baronian • Hari Boukis • Caleb S. Cullen • Steve Case • Kalyana Krishna Chadalavada • Keir Davis • Adam M. Dutko • Michael Eager • Nick Faltys • Ken Firestone • Dennis Franklin Frey • Victor Gregorio • Kristian Erik Hermansen • Philip Jacob • Jay Kruizenga • David A. Lane • Steve Marquez • Dave McAllister • Craig Oda • Rob Orsini • Jeffrey D. Parent • Wayne D. Powel • Shawn Powers • Mike Roberts • Draciron Smith • Chris D. Stark • Patrick Swartz

Editorial Advisory Board
Daniel Frye, Director, IBM Linux Technology Center
Jon "maddog" Hall, President, Linux International
Lawrence Lessig, Professor of Law, Stanford University
Ransom Love, Director of Strategic Relationships, Family and Church History Department, Church of Jesus Christ of Latter-day Saints
Sam Ockman
Bruce Perens
Bdale Garbee, Linux CTO, HP
Danese Cooper, Open Source Diva, Intel Corporation

Advertising
E-MAIL: ads@linuxjournal.com
URL: www.linuxjournal.com/advertising
PHONE: +1 713-344-1956 ext. 2

Subscriptions
E-MAIL: subs@linuxjournal.com
URL: www.linuxjournal.com/subscribe
PHONE: +1 818-487-2089
FAX: +1 818-487-4550
TOLL-FREE: 1-888-66-LINUX
MAIL: PO Box 16476, North Hollywood, CA 91615-9911 USA
Please allow 4–6 weeks for processing address changes and orders.

PRINTED IN USA

LINUX is a registered trademark of Linus Torvalds.


Special_Issue.tar.gz

SHAWN POWERS

Welcome to the Linux Journal family. The issue you currently hold in your hands, or more precisely, the issue you're reading on your screen, was created specifically for you. It's so frustrating to subscribe to a magazine and then wait a month or two before you ever get to read your first issue. You decided to subscribe, so we decided to give you some instant gratification. Because really, isn't that the best kind?

This entire issue is dedicated to system administration. As a Linux professional or an everyday penguin fan, you're most likely already a sysadmin of some sort. Sure, some of us are in charge of administering thousands of computers, and some of us just keep our single desktop running. With Linux, though, it doesn't matter how big or small your infrastructure might be. If you want to run a MySQL server on your laptop, that's not a problem. In fact, in this issue, we even talk about stored procedures on MySQL 5, so we'll show you how to get the most out of that laptop server.

If you do manage lots of machines, however, there are some tools that can make your job much easier. Cfengine, for instance, can help with configuration management for tens, hundreds or even thousands of remote computers. It sure beats running to every server and configuring them individually. But what if you mess up those configuration files, and half your servers go off-line? Don't worry, we'll teach you about Heartbeat. It's one of those high-availability programs that helps you keep going even when some of your servers die. If a computer dies alone in your server room, does anyone hear it scream? Heartbeat does.

Do you have more than one server room? More than one location? Many of us do. Programs like Zenoss can help you monitor your whole organization from one central place. In the following pages, we'll show you all about it. In fact, using a VPN (which we talk about here) and remote desktop sessions (again, covered here), you probably can manage most of your network without even leaving home. If you haven't reconfigured a server from a coffee shop across the country, you haven't lived.

I do understand, however, that lots of sysadmins enjoy going in to work. That's great; we like office coffee too. Keep reading to find many hands-on tools that will make your work day more efficient, more productive and even more fun. Billix, for example, is a USB-booting Linux distribution that has tons of great features ready to use. It can install operating systems, rescue broken computers, and if you want to do something it doesn't do by default, that's okay, the article explains how to include any custom tools you might want.

Whether it's booting thin clients over a wireless bridge or creating custom PXE boot menus, the life of a system administrator is never boring. At Linux Journal, we try to make your job easier month after month with product reviews, tech tips, games (you know, for your lunch break) and in-depth articles like you'll read here. Whether you're a command-line junkie or a GUI guru, Linux has the tools available to make your life easier.

We hope you enjoy this bonus issue of Linux Journal. If you read this whole thing and are still anxious to get your hands on more Linux and open-source-related information, but your first paper issue hasn't arrived yet, don't forget about our Web site. Check us out at www.linuxjournal.com. If you manage to read the thousands and thousands of on-line articles and still don't have your first issue, I suggest checking with the post office. There must be something wrong with your mailbox.

Shawn Powers is the Associate Editor for Linux Journal. He's also the Gadget Guy for LinuxJournal.com, and he has an interesting collection of vintage Garfield coffee mugs. Don't let his silly hairdo fool you; he's a pretty ordinary guy and can be reached via e-mail at shawn@linuxjournal.com. Or, swing by the #linuxjournal IRC channel on Freenode.net.

System Administration: Instant Gratification Edition

6 | www.linuxjournal.com

Cfengine is known by many system administrators to be an excellent tool to automate manual tasks on UNIX- and Linux-based machines. It also is the most comprehensive framework to execute administrative shell scripts across many servers running disparate operating systems. Although cfengine is certainly good for these purposes, it also is widely considered the best open-source tool available for configuration management. Using cfengine, sysadmins with a large installation of, say, 800 machines can have information about their environment quickly that otherwise would take months to gather, as well as the ability to change the environment in an instant. For an initial example, if you have a set of Linux machines that need to have a different /etc/nsswitch.conf, and then have some processes restarted, there's no need to connect to each machine and perform these steps or even to write a script and run it on the machines once they are identified. You simply can tell cfengine that all the Linux machines running Fedora/Debian/CentOS with XGB of RAM or more need to use a particular /etc/nsswitch.conf until a newer one is designated. Cfengine can do all that in a one-line statement.
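For instance (a sketch of my own, not from the article: the soft class mem_2gb_plus and the $(copydir) source path are hypothetical and would have to be defined elsewhere, such as in a groups.cf file), such a one-liner might look like this in cfengine's class syntax, where "." means AND and "|" means OR:

```
copy:

  (fedora|debian|centos).mem_2gb_plus::

    $(copydir)/linux/nsswitch.conf dest=/etc/nsswitch.conf
```
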

Cfengine's configuration management capabilities can work in several different ways. In this article, I focus on a make-it-so-and-keep-it-so approach. Let's consider a small hosting company configuration with three administrators and two data centers (Figure 1).

Each administrator can use a Subversion/CVS sandbox to hold repositories for each data center. The cfengine client will run on each client machine, either through a cron job or a cfengine execution dæmon, and pull the cfengine configuration files appropriate for each machine from the server. If there is work to be done for that particular machine, it will be carried out and reported to the server. If there are configuration files to copy, the ones active on the client host will be replaced by the copies on the cfengine server. (Cfengine will not replace a file if the copy process is partial or incomplete.)

A cfengine implementation has three major components:

- Version control: this usually consists of a versioning system, such as CVS or Subversion.

- Cfengine internal components: cfservd, cfagent, cfexecd, cfenvd, cfagent.conf and update.conf.

- Cfengine commands: processes, files, shellcommands, groups, editfiles, copy and so forth.

The cfservd is the master dæmon, configured with /etc/cfservd.conf, and it listens on port 5803 for connections to the cfengine server. This dæmon controls security and directory access for all client machines connecting to it. cfagent is the client program for running cfengine on hosts. It will run either from cron, manually or from the execution dæmon for cfengine, cfexecd. A common method for running the cfagent is to execute it from cron using the cfexecd in non-dæmon mode. The primary reason for using both is to engage cfengine's logging system. This is accomplished using the following:

*/10 * * * * /var/cfengine/sbin/cfexecd -F

as a cron entry on Linux (unless Solaris starts to understand */10). Note that this is fairly frequent and good only for a low number of servers. We don't want 800 servers updating within the same ten minutes.

Cfengine for Enterprise Configuration Management
Cfengine makes it easier to manage configuration files across large numbers of machines. SCOTT LACKEY

SYSTEM ADMINISTRATION

Figure 1. How the Few Control the Many

The cfenvd is the "environment dæmon" that runs on the client side of the cfengine implementation. It gathers information about the host machine, such as hostname, OS and IP address. The cfenvd detects these factors about a host and uses them to determine to which groups the machine belongs. This, in effect, creates a profile for each machine that cfengine uses to determine what work to perform on each host.

The master configuration file for each host is cfagent.conf. This file can contain all the configuration information and cfengine code for the host, a subset of hosts or all hosts in the cfengine network. This file is often just a starting point, where all configurations are stored in other files and "imported" into cfagent.conf, in a very similar fashion to Nagios configuration files. The update.conf file is the fundamental configuration file for the client. It primarily just identifies the cfengine server and gets a copy of the cfagent.conf.

Figure 2. Automated Distribution of Cfengine Files

The update.conf file tells the cfengine server to deploy a new cfagent.conf file (and perhaps other files as well) if the current copy on the host machine is different. This adds some protection for a scenario where a corrupt cfagent.conf is sent out, or in case there never was one. Although you could use cfengine to distribute update.conf, it should be copied manually to each host.
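The article doesn't list an update.conf, but a minimal sketch, assuming a policy host named cfengine.example1.com and a /masterfiles/inputs directory exported by cfservd, might look roughly like this:

```
# update.conf (sketch; the hostname and master path are assumptions)

control:

  actionsequence = ( copy )
  policyhost     = ( cfengine.example1.com )
  master_cfinput = ( /masterfiles/inputs )

copy:

  $(master_cfinput) dest=/var/cfengine/inputs
                    r=inf mode=644 type=binary
                    server=$(policyhost)
```
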

Cfengine "commands" are not entered on the command line. They make up the syntax of the cfengine configuration language. Because cfengine is a framework, the system administrator must write the necessary commands in cfengine configuration files in order to move and manipulate data. As an example, let's take a look at the files command, as it would appear in the cfagent.conf file:

files:

  /etc/passwd mode=644
              owner=root action=fixall

  /etc/shadow mode=600
              owner=root action=fixall

This would set all machines' /etc/passwd and /etc/shadow files to the permissions listed in the file (644 and 600). It also would change the owner of the file to root and fix all of these settings if they are found to be different, each time cfengine runs. It's important to keep in mind that there are no group limitations to this particular files command. If cfengine does not have a group listed for the command, it assumes you mean any host. This also could be written as:

files:

  any::

    /etc/passwd mode=644
                owner=root action=fixall

    /etc/shadow mode=600
                owner=root action=fixall

This brings us to an important topic in building a cfengine implementation: groups. There is a groups command that can be used to assign hosts to groups based on various criteria. Custom groups that are created in this way are called soft groups. The groups that are filled by the cfenvd dæmon automatically are referred to as hard groups. To use the groups feature of cfengine and assign some soft groups, simply create a groups.cf file, and tell the cfagent.conf to import it somewhere in the beginning of the file:

import:

  any::

    groups.cf

Cfengine will look for the groups.cf file in the default directory, /var/cfengine/inputs. Now you can create arbitrary groups based on any criteria. It is important to remember that the terms groups and classes are completely interchangeable in cfengine:

groups:

  development = ( nfs01 nfs02 10.0.0.17 )

  production = ( app01 app02 development )

You also can combine hard groups that have been discovered by cfenvd with soft groups:

groups:

  legacy = ( irix compiled_on_cygwin sco )

Let's get our testing setup in order. First, install cfengine on a server and a client or workstation. Cfengine has been compiled on almost everything, so there should be a package for your OS/distribution. Because the source is usually the latest version, and many versions are bug fixes, I recommend compiling it yourself. Installing cfengine gives you both the server and client binaries and utilities on every machine, so be careful not to run the server dæmon (cfservd) on a client machine unless you specifically intend to do that. After the install, you should have a /var/cfengine/ directory and the binaries mentioned previously.

Before any host can actually communicate with the cfengine server, keys must be exchanged between the two. Cfengine keys are similar to SSH keys, except they are one-way. That is to say, both the server and the client must have each other's public key in order to communicate. Years of sysadmin paranoia cause me to recommend manually copying all keys and trusting nothing. Copy /var/cfengine/ppkeys/localhost.pub from the server to all the clients, and from the clients to the server, in the same directory, renaming them /var/cfengine/ppkeys/root-10.1.10.1.pub, where the IP is 10.1.10.1.
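A rough sketch of that manual copy (my own, not from the article; the hosts and IPs are examples, and the tiny helper just builds the root-&lt;IP&gt;.pub filename cfengine expects):

```shell
# Hypothetical helper: build the key filename cfengine expects for a
# peer whose IP address is given (keys are named root-<IP>.pub).
key_name_for_ip() { printf 'root-%s.pub\n' "$1"; }

key_name_for_ip 10.1.10.1   # -> root-10.1.10.1.pub

# Copying the server's public key to a client, renamed after the
# server's own IP (commented out; 10.2.2.1 and the client hostname
# are examples only):
# scp /var/cfengine/ppkeys/localhost.pub \
#     root@client01:/var/cfengine/ppkeys/root-10.2.2.1.pub
```
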

On the server side, cfservd.conf must be configured to allow clients to access particular directories. To do this, create an AllowConnectionsFrom and an admit section:

# cfservd.conf

control:

  AllowConnectionsFrom = ( 192.168.0.0/24 )

admit:

  /configs/datacenter1 *.example1.com
  /configs/datacenter2 *.example2.com

To test your example client to see whether it is connecting to the cfengine server, make sure port 5803 is clear between them, and run the server with:

cfservd -v -d2

And on the client run

cfagent -v --no-splay

This will give you a lot of debugging information on the server side to see what's working and what isn't.

Now, let's take a look at distributing a configuration file. Although cfengine has a full-featured file editor in the editfiles command, using this method for distributing configurations is not advised. The copy command will move a file from the server to the client machine with .cfnew appended to the filename. Then, once the file has been copied completely, it renames the file and saves the old copy as .cfsaved in the specified directory. Here's the copy command syntax:

copy:

  class::

    <master-file>

      dest=target-file
      server=server
      mode=mode
      owner=owner
      group=group
      backup=true/false
      repository=backup dir
      recurse=number/inf/0
      define=classlist

Only the dest= is required, along with the filename to save at the destination. These can be different. Here's another example:

copy:

  linux::

    $(copydir)/linux/resolv.conf

      dest=/etc/resolv.conf
      server=cfengine.example1.com
      mode=644
      owner=root
      group=root
      backup=true
      repository=/var/cfengine/cfbackup
      recurse=0
      define=copiedresolvdotconf

The last line in this copy statement assigns this host to a group called copiedresolvdotconf. Although we don't have to do anything after copying this particular file, we may want to do some action on all hosts that just had this file successfully sent to them, such as sending an e-mail or restarting a process. As another example, if you update a configuration file that is attached to a dæmon, you may want to send a SIGHUP to the process to cause it to reread the configuration file. This is common with Apache's httpd.conf or inetd's inetd.conf. If the copy is not successful, this server won't be added to the copiedresolvdotconf class. You can query all servers in the network to see whether they are members and, if not, find out what went wrong.
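For example (a sketch of my own, not from the article; nscd is just a plausible dæmon to nudge after a resolver change, and the killall path may differ by OS), a shellcommands stanza keyed to that class would act only on the hosts that actually received the file:

```
shellcommands:

  copiedresolvdotconf::

    "/usr/bin/killall -HUP nscd"
```
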

A great way to version control your config files is to use a cfengine variable for the filename being copied, to control which version gets distributed. Such a line may look something like this:

copy:

  linux::

    $(copydir)/linux/$(resolv_conf)

Or, better yet, you can use cfengine's class-specific variables, whose scope is limited to the class with which they are associated. This makes copy statements much more elegant and can simplify changes as your cfengine files scale:

control:

  # $(resolve_conf) value depends on context:
  # is this a linux machine or hpux?

  linux:: resolve_conf = ( $(copydir)/linux/resolv.conf )

  hpux:: resolve_conf = ( $(copydir)/hpux/resolv.conf )

copy:

  linux::

    $(resolve_conf)

Here is a full cfagent.conf file that makes use of everything I've covered thus far. It also adds some practical examples of how to do sysadmin work with cfengine:

# cfagent.conf

control:

  actionsequence = ( files editfiles processes )
  AddInstallable = ( cron_restart )

  solaris:: crontab = ( /var/spool/cron/crontabs/root )
  linux::   crontab = ( /var/spool/cron/root )

files:

  solaris::
    $(crontab) action=touch

  linux::
    $(crontab) action=touch

editfiles:

  solaris::
    { $(crontab)
      AppendIfNoSuchLine "0,10,20,30,40,50 * * * * /var/cfengine/sbin/cfexecd -F"
      DefineClasses "cron_restart"
    }

  linux::
    { $(crontab)
      AppendIfNoSuchLine "0,10,20,30,40,50 * * * * /var/cfengine/sbin/cfexecd -F"
      # linux doesn't need a cron restart
    }

shellcommands:

  solaris.cron_restart::
    "/etc/init.d/cron stop"
    "/etc/init.d/cron start"

import:

  any::

    groups.cf
    copy.cf

The above is a full cfagent configuration that adds cfengine execution from cron to each client (if it's Linux or Solaris). So, effectively, once you run cfengine manually for the first time with this cfagent.conf file, cfengine will continue to run every ten minutes from that host, but you won't need to edit or restart cron. The control section of the cfagent.conf is where you can define some variables that will control how cfengine handles the configuration file. actionsequence tells cfengine what order to execute each command, and AddInstallable is a variable that holds soft groups that get defined later in the file in a "define" statement, such as after the editfiles command, where the line is DefineClasses "cron_restart". The reason for using AddInstallable is that sometimes cfengine skips over groups that are defined after command execution, and defining that group in the control section ensures that the command will be recognized throughout the configuration.

Being able to check configuration files out from a versioning system and distribute them to a set of servers is a powerful system administration tool. A number of independent tools will do a subset of cfengine's work (such as rsync, ssh and make), but nothing else allows a small group of system administrators to manage such a large group of servers. Centralizing configuration management has the dual benefit of information and control, and cfengine provides these benefits in a free, open-source tool for your infrastructure and application environments.

Scott Lackey is an independent technology consultant who has developed and deployed configuration management solutions across industry, from NASA to Wall Street. Contact him at slackey@violetconsulting.net, www.violetconsulting.net.


There are many different ways to control a Linux system over a network, and many reasons you might want to do so. When covering remote control in past columns, I've tended to focus on server-oriented usage scenarios and tools, which is to say, on administering servers via text-based applications, such as OpenSSH. But what about GUI-based applications and remote desktops?

Remote desktop sessions can be very useful for technical support, surveillance and remote control of desktop applications. But it isn't always necessary to access an entire desktop; you may need to run only one or two specific graphical applications.

In this month's column, I describe how to use VNC (Virtual Network Computing) for remote desktop sessions and OpenSSH with X forwarding for remote access to specific applications. Our focus here, of course, is on using these tools securely, and I include a healthy dose of opinion as to the relative merits of each.

Remote Desktops vs. Remote Applications

So, which approach should you use: remote desktops or remote applications? If you've come to Linux from the Microsoft world, you may be tempted to assume that because Terminal Services in Windows is so useful, you have to have some sort of remote desktop access in Linux, too. But that may not be the case.

Linux and most other UNIX-based operating systems use the X Window System as the basis for their various graphical environments. And, the X Window System was designed to be run over networks. In fact, it treats your local system as a self-contained network, over which different parts of the X Window System communicate.

Accordingly, it's not only possible but easy to run individual X Window System applications over TCP/IP networks: that is, to display the output (window) of a remotely executed graphical application on your local system. Because the X Window System's use of networks isn't terribly secure (the X Window System has no native support whatsoever for any kind of encryption), nowadays we usually tunnel X Window System application windows over the Secure Shell (SSH), especially OpenSSH.

The advantage of tunneling individual application windows is that it's faster and generally more secure than running the entire desktop remotely. The disadvantages are that OpenSSH has a history of security vulnerabilities, and that for many Linux newcomers, forwarding graphical applications via commands entered in a shell session is counterintuitive. And besides, as I mentioned earlier, remote desktop control (or even just viewing) can be very useful for technical support and for security surveillance.

Using OpenSSH with X Window System Forwarding

Having said all that, tunneling X Window System applications over OpenSSH may be a lot easier than you imagine. All you need is a client system running an X server (for example, a Linux desktop system or a Windows system running the Cygwin X server) and a destination system running the OpenSSH dæmon (sshd).

Note that I didn't say "a destination system running sshd and an X server". This is because X servers, oddly enough, don't actually run or control X Window System applications; they merely display their output. Therefore, if you're running an X Window System application on a remote system, you need to run an X server on your local system, not on the remote system. The application will execute on the remote system and send its output to your local X server's display.

Suppose you've got two systems, mylaptop and remotebox, and you want to monitor system resources on remotebox with the GNOME System Monitor. Suppose further you're running the X Window System on mylaptop and sshd on remotebox.

First, from a terminal window or xterm on mylaptop, you'd open an SSH session like this:

mick@mylaptop:~$ ssh -X admin-slave@remotebox
admin-slave@remotebox's password:
Last login: Wed Jun 11 21:50:19 2008 from dtcla00b674986d
admin-slave@remotebox:~$

Note the -X flag in my ssh command. This enables X Window System forwarding for the SSH session. In order for that to work, sshd on the remote system must be configured with X11Forwarding set to yes in its /etc/ssh/sshd_config file. On many distributions, yes is the default setting, but check yours to be sure.
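A quick way to check from a shell (the helper below is my own, not part of OpenSSH; it just greps a config file at the stock OpenSSH location):

```shell
# Succeed if an sshd_config-style file explicitly enables X11
# forwarding (commented-out lines don't count).
x11_forwarding_enabled() {
  grep -Eiq '^[[:space:]]*X11Forwarding[[:space:]]+yes' "$1"
}

# Typical usage on the remote system (commented out here):
# x11_forwarding_enabled /etc/ssh/sshd_config && echo "X11 forwarding on"
```
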

Next, to run the GNOME System Monitor on remotebox, such that its output (window) is displayed on mylaptop, simply execute it from within the same SSH session:

admin-slave@remotebox:~$ gnome-system-monitor &

The trailing ampersand (&) causes this command to run in the background, so you can initiate other commands from the same shell session. Without this, the cursor won't reappear in your shell window until you kill the command you just started.

Secured Remote Desktop/Application Sessions
Run graphical applications from afar, securely. MICK BAUER

At this point, the GNOME System Monitor window should appear on mylaptop's screen, displaying system performance information for remotebox. And that really is all there is to it.

This technique works for practically any X Window System application installed on the remote system. The only catch is that you need to know the name of anything you want to run in this way: that is, the actual name of the executable file.

If you're accustomed to starting your X Window System applications from a menu on your desktop, you may not know the names of their corresponding executables. One quick way to find out is to open your desktop manager's menu editor, and then view the properties screen for the application in question.

For example, on a GNOME desktop, you would right-click on the Applications menu button, select Edit Menus, scroll down to System/Administration, right-click on System Monitor and select Properties. This pops up a window whose Command field shows the name gnome-system-monitor.

Figure 1 shows the Launcher Properties, not for the GNOME System Monitor, but instead for the GNOME File Browser, which is a better example, because its launcher entry includes some command-line options. Obviously, all such options also can be used when starting X applications over SSH.

Figure 1. Launcher Properties for the GNOME File Browser (Nautilus)

If this sounds like too much trouble, or if you're worried about accidentally messing up your desktop menus, you simply can run the application in question, issue the command ps auxw in a terminal window, and find the entry that corresponds to your application. The last field in each row of the output from ps is the executable's name plus the command-line options with which it was invoked.
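To make that concrete (the helper and sample line are mine, not from the column): in `ps aux`-format output, the command and its options start at the 11th whitespace-separated field, so a little awk pulls them out.

```shell
# Print from the 11th field onward of a `ps aux`-format line, i.e.
# the command plus its command-line options.
ps_command() {
  echo "$1" | awk '{out = $11; for (i = 12; i <= NF; i++) out = out " " $i; print out}'
}

# A made-up ps line for illustration:
sample='mick 4242 0.0 0.5 91000 24000 ? S 21:50 0:00 nautilus --no-desktop'
ps_command "$sample"   # -> nautilus --no-desktop

# Live usage (commented out): ps auxw | grep '[n]autilus'
```
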

Once you've finished running your remote X Window System application, you can close it the usual way (selecting Exit from its File menu, clicking the x button in the upper right-hand corner of its window and so forth). Don't forget to close your SSH session, too, by issuing the command exit in the terminal window where you're running SSH.

Virtual Network Computing (VNC)

Now that I've shown you the preferred way to run remote X Window System applications, let's discuss how to control an entire remote desktop. In the Linux/UNIX world, the most popular tool for this is Virtual Network Computing, or VNC.

Originally a project of the Olivetti Research Laboratory (which was subsequently acquired by Oracle and then AT&T before being shut down), VNC uses a protocol called Remote Frame Buffer (RFB). The original creators of VNC now maintain the application suite RealVNC, which is available in free and commercial versions, but TightVNC, UltraVNC and GNOME's vino VNC server and vinagre VNC client also are popular.

VNC's strengths are its simplicity, ubiquity and portability: it runs on many different operating systems. Because it runs over a single TCP port (usually TCP 5900), it's also firewall-friendly and easy to tunnel.

Its security, however, is somewhat problematic. VNC authentication uses a DES-based transaction that, if eavesdropped on, is vulnerable to optimized brute-force (password-guessing) attacks. This vulnerability is exacerbated by the fact that many versions of VNC support only eight-character passwords.

Furthermore, VNC session data usually is transmitted unencrypted. Only a couple flavors of VNC support TLS encryption of RFB streams, and it isn't clear how securely TLS has been implemented even in those. Thus, an attacker using a trivially hacked VNC client may be able to reconstruct and view eavesdropped VNC streams.


But Don't Real Sysadmins Stick to Terminal Sessions?

If you've read my book or my past columns, you've endured my repeated exhortations to keep the X Window System off of Internet-facing servers, or any other system on which it isn't needed, due to X's complexity and history of security vulnerabilities. So, why am I devoting an entire column to graphical remote system administration?

Don't worry, I haven't gone soft-hearted (though possibly slightly soft-headed). I stand by that advice. But there are plenty of contexts in which you may need to administer or view things remotely besides hardened servers in Internet-facing DMZ networks.

And, not all people who need to run remote applications in those non-Internet-DMZ scenarios are advanced Linux users. Should they be forbidden from doing what they need to do until they've mastered using the vi editor and writing bash scripts? Especially given that it is possible to mitigate some of the risks associated with the X Window System and VNC?

Of course they shouldn't. Although I do encourage all Linux newcomers to embrace the command line, the day may come when Linux is a truly graphically oriented operating system, like Mac OS; but for now, pretty much the entire OS is driven by configuration files in /etc (and in users' home directories), and that's unlikely to change soon.

Luckily, as it operates over a single TCP port, VNC is easy to tunnel through SSH, through Virtual Private Network (VPN) sessions and through TLS/SSL wrappers, such as stunnel. Let's look at a simple VNC-over-SSH example.

Tunneling VNC over SSH

To tunnel VNC over SSH, your remote system must be running an SSH dæmon (sshd) and a VNC server application. Your local system must have an SSH client (ssh) and a VNC client application.

Our example remote system, remotebox, already is running sshd. Suppose it also has vino, which is also known as the GNOME Remote Desktop Preferences applet (on Ubuntu systems, it's located in the System menu's Preferences section). First, presumably from remotebox's local console, you need to open that applet and enable remote desktop sharing. Figure 2 shows vino's General settings.

If you want to view only this system's remote desktop, without controlling it, uncheck Allow other users to control your desktop. If you don't want to have to confirm remote connections explicitly (for example, because you want to connect to this system when it's unattended), you can uncheck the Ask you for confirmation box. Any time you leave yourself logged in to an unattended system, be sure to use a password-protected screensaver.

vino is limited in this way: because vino is loaded only after you log in, you can use it only to connect to a fully logged-in GNOME session in progress, not, for example, to a gdm (GNOME login) prompt. Unlike vino, other versions of VNC can be invoked as needed from xinetd or inetd. That technique is out of the scope of this article, but see Resources for a link to a how-to for doing so in Ubuntu, or simply Google the string "vnc xinetd".

Continuing with our vino example, don't close that Remote Desktop Preferences applet yet. Be sure to check the Require the user to enter this password box, and to select a difficult-to-guess password. Remember, vino runs in an already-logged-in desktop session, so unless you set a password here, you'll run the risk of allowing completely unauthenticated sessions (depending on whether a password-protected screensaver is running).

If your remote system will be run logged in but unattended, you probably will want to uncheck Ask you for confirmation. Again, don't forget that locked screensaver.

We're not done setting up vino on remotebox yet. Figure 3 shows the Advanced Settings tab.

Several advanced settings are particularly noteworthy. First, you should check Only allow local connections, after which remote connections still will be possible, but only via a port-forwarder like SSH or stunnel. Second, you may want to set a custom TCP port for vino to listen on, via the Use an alternative port option. In this example, I've chosen 20226. This is an instance of useful security-through-obscurity; if our other (more meaningful) security controls fail, at least we won't be running VNC on the obvious default port.

Figure 2. General Settings in GNOME Remote Desktop Preferences (vino)

Figure 3. Advanced Settings in GNOME Remote Desktop Preferences (vino)

Also, you should check the box next to Lock screen on disconnect, but you probably should not check Require encryption, as SSH will provide our session encryption, and adding redundant (and not-necessarily-known-good) encryption will only increase vino's already-significant latency. If you think there's only a moderate level of risk of eavesdropping in the environment in which you want to use vino (for example, on a small, private, local-area network inaccessible from the Internet), vino's TLS implementation may be good enough for you. In that case, you may opt to check the Require encryption box and skip the SSH tunneling.

(If any of you know more about TLS in vino than I was able to divine from the Internet, please write in. During my research for this article, I found no documentation or on-line discussions of vino's TLS design details whatsoever, beyond people commenting that the soundness of TLS in vino is unknown.)

This month, I seem to be offering you more "opt-out" opportunities in my examples than usual. Perhaps I'm becoming less dogmatic with age. Regardless, let's assume you've followed my advice to forego vino's encryption in favor of SSH.

At this point, you're done with the server-side settings. You won't have to change those again. If you restart your GNOME session on remotebox, vino will restart as well, with the options you just set. The following steps, however, are necessary each time you want to initiate a VNC/SSH session.

On mylaptop (the system from which you want to connect to remotebox), open a terminal window, and type this command:

mick@mylaptop:~$ ssh -L 20226:remotebox:20226 admin-slave@remotebox

OpenSSH's -L option sets up a local port-forwarder. In this example, connections to mylaptop's TCP port 20226 will be forwarded to the same port on remotebox. The syntax for this option is "-L [local port]:[remote IP or hostname]:[remote port]". You can use any available local TCP port you like (higher than 1024, unless you're running SSH as root), but the remote port must correspond to the alternative port you set vino to listen on (20226 in our example), or if you didn't set an alternative port, it should be VNC's default of 5900.
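Since VNC display :N maps to TCP port 5900+N by default, a tiny helper (mine, for illustration) makes the port arithmetic explicit; the commented ssh line restates the example tunnel using OpenSSH's standard -f (go to background after authentication) and -N (no remote command) flags:

```shell
# VNC display :N listens on TCP 5900+N by default; vino in our
# example uses a custom port (20226) instead.
vnc_port() { echo $((5900 + $1)); }

vnc_port 0   # -> 5900
vnc_port 1   # -> 5901

# Backgrounded variant of the tunnel from the text (hosts are the
# article's example names):
# ssh -f -N -L 20226:remotebox:20226 admin-slave@remotebox
```
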

That's it for the SSH session. You'll be prompted for a password and then given a bash prompt on remotebox. But you won't need it, except to enter exit when your VNC session is finished. It's time to fire up your VNC client.

vino's companion client, vinagre (also known as the GNOME Remote Desktop Viewer), is good enough for our purposes here. On Ubuntu systems, it's located in the Applications menu, in the Internet section. In Figure 4, I've opened the Remote Desktop Viewer and clicked the Connect button. As you can see, rather than remotebox, I've entered localhost as the hostname. I've also entered port number 20226.

When I click the Connect button, vinagre connects to mylaptop's local TCP port 20226, which actually is being listened on by my local SSH process. SSH forwards this connection attempt through its encrypted connection to TCP 20226 on remotebox, which is being listened on by remotebox's vino process.

At this point, I'm prompted for remotebox's vino password (shown in Figure 2). On successful authentication, I'll have full access to my active desktop session on remotebox.

To end the session, I close the Remote Desktop Viewer's session, and then enter exit in my SSH session to remotebox. That's all there is to it.

This technique is easy to adapt to other versions of VNC servers and clients, and probably also to other versions of SSH; there are commercial alternatives to OpenSSH, including Windows versions. I mentioned that VNC client applications are available for a wide variety of platforms; on any such platform, you can use this technique, so long as you also have an SSH client that supports port forwarding.

Conclusion
Thus ends my crash course on how to control graphical applications over networks. I hope my examples of both techniques, SSH with X forwarding and VNC over SSH, help you get started with whatever particular distribution and software packages you prefer. Be safe!

Mick Bauer (darthelmo@wiremonkeys.org) is Network Security Architect for one of the US's largest banks. He is the author of the O'Reilly book Linux Server Security, 2nd edition (formerly called Building Secure Servers With Linux), an occasional presenter at information security conferences and composer of the "Network Engineering Polka".


Resources

Cygwin/X (information about Cygwin's free X server for Microsoft Windows): x.cygwin.com

Tichondrius' HOWTO for setting up VNC with resumable sessions (Ubuntu-centric, but mostly applicable to other distributions): ubuntuforums.org/showthread.php?t=122402

Wikipedia's VNC article, which may be helpful in making sense of the different flavors of VNC: en.wikipedia.org/wiki/Vnc

Figure 4. Using vinagre to Connect to an SSH-Forwarded Local Port


Does anyone remember Linuxcare? Founded in 1998, Linuxcare was a company that provided support services for Linux users in corporate environments. I remember seeing Linuxcare at the first-ever LinuxWorld conference in San Jose, and the thing I took away from that LinuxWorld was the Linuxcare Bootable Business Card (BBC). The BBC was a 50MB cut-down Linux distribution that fit on a business-card-size compact disc. I used that distribution to recover and repair quite a few machines, until the advent of Knoppix. I always loved the portability of that little CD though, and I missed it greatly until I stumbled across Damn Small Linux (DSL) one day.

After reading through the DSL Web site, I discovered that it was possible to run DSL off of a bootable USB key, and that old love for the Bootable Business Card was rekindled in a new way. It wasn't until I had a conversation with fellow sysadmin Kyle Rankin about the PXE boot environment he'd implemented that I realized it might be possible to set up a USB key to do more than merely boot a recovery environment. Before long, I had added the CentOS and Ubuntu netinstalls to my little USB key. Not long after that, I was mentioning this in my favorite IRC channel, and one of the fellows in there suggested I put the code on SourceForge and call it Billix. I'd had a couple beers by then and thought it sounded like a great idea. In that instant, Billix was born.

Billix is an aggregation of many different tools that can be useful to system administrators, all compressed down to fit within a 256MB bootable USB thumbdrive. The 256MB size is not an arbitrary number; rather, it was chosen because USB thumbdrives are very inexpensive at that size (many companies now give them away as advertising gimmicks). This allows me to have many Billix keys lying around, just waiting to be used. Because the keys are cheap or free, I don't feel bad about leaving one in a server for a day or two. If your USB drive is larger than 256MB, you still can use it for its designed purpose of storing files; Billix doesn't hamper normal use of the USB drive in any way. There also is an ISO distribution of Billix if you want to burn a CD of it, but I feel it's not nearly as convenient as having it on a USB key.

The current Billix distribution (0.21 at the time of this writing) includes the following tools:

Damn Small Linux 4.2.5

Ubuntu 8.04 LTS netinstall

Ubuntu 7.10 netinstall

Ubuntu 6.06 LTS netinstall

Fedora 8 netinstall

CentOS 5.1 netinstall

CentOS 4.6 netinstall

Debian Etch netinstall

Debian Sarge netinstall

Memtest86 memory-checking utility

Ntpwd Windows password changing utility

DBAN disk wiper utility

So, with one USB key, a system administrator can recover or repair a machine, install one of eight different Linux distributions, test the memory in a system, get into a Windows machine with a lost password, or wipe the disks of a machine before repurpose or disposal. In order to install any of the netinstall-based Linux distributions, a working Internet connection with DHCP is required, as the netinstall downloads the installation bits for each distribution on the fly from Internet-based mirrors.

Hopefully, you're excited to check out Billix. You simply can download the ISO version and burn it to a CD to get started, but the full utility of Billix really shines when you install it on a USB disk. Before you install it on a USB disk, you need to meet the following prerequisites:

Billix: a Sysadmin's Swiss Army Knife
Turn that spare USB stick into a sysadmin's dream with Billix. BILL CHILDERS

SYSTEM ADMINISTRATION


256MB or greater USB drive, with FAT- or FAT32-based filesystem.

Internet connection with DHCP (for netinstalls only; not required for DSL, Windows password removal or disk wiping with DBAN).

install-mbr (part of the mbr package on Ubuntu or Debian, needed for some USB drives).

syslinux (from the syslinux package on Ubuntu or Debian, required to create the bootsector on the USB drive).

Your system must be capable of booting from USB devices (most have this ability if they're made after 2005).

To install the USB-based version of Billix, first check your drive. If that drive has the U3 Windows software on it, you may want to remove it to unlock all of the drive's capacity (see the Resources section for U3 removal utilities, which are typically Windows-based). Next, if your USB drive has data on it, back up the data. I cannot stress this enough. You will be making adjustments to the partition table of the USB drive, so backing up any data that already is on the key is critical. Download the latest version of Billix from the Sourceforge.net project page to your computer. Once the download is complete, untar the contents of the tarball to the root directory of your USB drive.

Now that the contents of the tarball are on your USB drive, you need to install a Master Boot Record (MBR) on the drive and set a bootsector on the drive. The Master Boot Record needs to be set up on the USB drive first. Issue an install-mbr -p1 <device> (where <device> is your USB drive, such as /dev/sdb). Warning: make sure that you get the device of the USB drive correct, or you run the risk of messing up the MBR on your system's boot device. The -p1 option tells install-mbr to set the first partition as active (that's the one that will contain the bootsector).

Next, the bootsector needs to be installed within the first partition. Run syslinux -s <device/partition> (where <device/partition> is the device and partition of the USB drive, such as /dev/sdb1). Warning: much like installing the MBR, installing the bootsector can be a dangerous operation if you run it on the wrong device, so take care and double-check your command line before pressing the Enter key.
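The two steps above can be sketched as a small shell function. This is only a minimal sketch; /dev/sdb is an example device name, and nothing runs until you call the function with a device you have triple-checked (against lsblk or fdisk -l, for instance):

```shell
# Minimal sketch of the MBR + bootsector setup described above.
# The device argument is an assumption -- verify it before calling!
make_billix_bootable() {
    dev="$1"                      # whole device, e.g. /dev/sdb
    install-mbr -p1 "$dev" &&     # write an MBR, mark partition 1 active
    syslinux -s "${dev}1"         # write the syslinux bootsector to partition 1
}
# Usage (destructive -- double-check the device first):
#   make_billix_bootable /dev/sdb
```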

Figure 1. The Billix Boot Menu

At this point, your USB drive can be unmounted safely, and you can test it out by booting from the USB drive. Once your system successfully boots from the USB drive, you should see a menu similar to the one shown in Figure 1. Simply choose the number for what you want to boot, run or install, and that distribution will spring into action. If you don't select a number, Damn Small Linux will boot automatically after 30 seconds.

Damn Small Linux is a miniature version of Knoppix (it actually has much of the automatic hardware-detection routines of Knoppix in it). As such, it makes an excellent rescue environment, or it can be used as a quick "trusted desktop" in the event you need to "borrow" a friend's computer to do something. I have used DSL in the past to commandeer a system temporarily at a cybercafé, so I could log in to work and fix a sick server. I've even used DSL to boot and mount a corrupted Windows filesystem, and I was able to save some of the data. DSL is fairly full-featured for its size, and it comes with two window managers (JWM or Fluxbox). It can be configured to save its data back to the USB disk in a persistent fashion, so you always can be sure you have your critical files with you and that it's easily accessible.

All the Linux distribution installations have one thing in common: they are all network-based installs. Although this is a good thing for Billix, as they take up very little space (around 10MB for each distro), it can be a bad thing during installation, as the installation time will vary with the speed of your Internet connection. There is one other upside to a network-based installation. In many cases, there is no need to update


Troubleshooting Billix

A few things can go wrong when converting a USB key to run Billix (or any USB-based distribution). The most common issue is for the USB drive to fail to boot the system. This can be due to several things. Older systems often split USB disk support into USB-Floppy emulation and USB-HDD emulation. For Billix to work on these systems, USB-HDD needs to be enabled. If your drive came with the U3 Windows-based software vault, this typically needs to be disabled or removed prior to installing Billix.

If you're seeing "MBR123" or something similar in the upper-left corner, but the system is hanging, you have a misconfigured MBR. Try install-mbr again, and make sure to use the -p1 switch. You will need to run syslinux again after running install-mbr. If all else fails, you probably need to wipe the USB drive and begin again. Back up the data on the USB drive, then use fdisk to build a new partition table (make sure to set it as FAT or FAT32). Use mkfs.vfat (with the -F 32 switch if it's a FAT32 filesystem) to build a new blank filesystem, untar the tarball again, and run install-mbr and syslinux on the newly defined filesystem.
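The "wipe and start over" sequence from this sidebar can be sketched as one function. The tarball name and the /mnt mountpoint are assumptions here (adjust to your download and system), and nothing runs until the function is called with a verified, backed-up device:

```shell
# Hedged sketch of the full rebuild sequence: new filesystem, unpack
# Billix, rewrite the MBR and bootsector. Assumes partition 1 already
# exists (recreate it with fdisk first if needed). Destructive!
rebuild_billix() {
    dev="$1"                          # e.g. /dev/sdb -- verify first
    tarball="$2"                      # e.g. billix-0.21.tar.gz (name assumed)
    mkfs.vfat -F 32 "${dev}1" &&      # new blank FAT32 filesystem
    mount "${dev}1" /mnt &&
    tar -xzf "$tarball" -C /mnt &&    # unpack Billix onto the key's root
    umount /mnt &&
    install-mbr -p1 "$dev" &&         # rewrite the MBR, partition 1 active
    syslinux -s "${dev}1"             # reinstall the bootsector
}
```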

the newly installed operating system after installation, because the OS bits that are downloaded are typically up to date. Note that when using the Red Hat-based installers (CentOS 4.6, CentOS 5.1 and Fedora 8), the system may appear to hang during the download of a file called minstg2.img. The system probably isn't hanging; it's just downloading that file, which is fairly large (around 40MB), so it can take a while depending on the speed of the mirror and the speed of your connection. Take care not to specify the USB disk accidentally as the install target for the distribution you are attempting to install.

The memtest86 utility has been around for quite a few years, yet it's a key tool for a sysadmin when faced with a flaky computer. It does only one thing, but it does it very well: it tests the RAM of a system very thoroughly. Simply boot off the USB drive, select memtest from the menu, and press Enter, and memtest86 will load and begin testing the RAM of the system immediately. At this point, you can remove the USB drive from the computer; it's no longer needed, as memtest86 is very small and loads completely into memory on startup.

The ntpwd Windows password "cracking" tool can be a controversial tool, but it is included in the Billix distribution because, as a system administrator, I've been asked countless times to get into Windows systems (or accounts on Windows systems) where the password has been lost or forgotten. The ntpwd utility can be a bit daunting, as the UI is text-based and nearly nonexistent, but it does a good job of mounting FAT32- or NTFS-based partitions, editing the SAM account database and saving those changes. Be sure to read all the messages that ntpwd displays, and take care to select the proper disk partition to edit. Also, take the program's advice and nullify a password rather than trying to change it from within the interface; zeroing the password works much more reliably.

DBAN (otherwise known as Darik's Boot and Nuke) is a very good "nuke it from orbit" hard disk wiper. It provides various levels of wipe, from a basic "overwrite the disk with zeros" to a full DoD-certified, multipass wipe. Like memtest86, DBAN is small and loads completely into memory, so you can boot the utility, remove the USB drive, start a wipe and move on to another system. I've used this to wipe clean disks on systems before handing them over to a recycler or before selling a system.

In closing, Billix may not make you coffee in the morning or eradicate Windows from the face of the earth, but having a USB key in your pocket that offers you the functionality to do all of those tasks quickly and easily can make the life of a system administrator (or any Linux-oriented person) much easier.

Bill Childers is an IT Manager in Silicon Valley, where he lives with his wife and two children. He enjoys Linux far too much, and probably should get more sun from time to time. In his spare time, he does work with the Gilroy Garlic Festival, but he does not smell like garlic.




Expanding Billix
It's relatively easy to expand Billix to support other Linux distributions, such as Knoppix or the Ubuntu live CDs. Copy the contents of the Billix USB tarball to a directory on your hard disk, and download the distro you want. Copy the necessary kernel and initrd to the directory where you put the contents of the USB tarball, taking care to rename any files if there are files in that directory with the same name. Copy any compressed filesystems that your new distro may use to the USB drive (for example, Knoppix has the KNOPPIX directory, and Puppy Linux uses PUP_XXX.SFS). Then, look at the boot configuration for that distro (it should be in isolinux.cfg). Take the necessary lines out of that file, and put them in the Billix syslinux.cfg file, changing filenames as necessary. Optionally, you can add a menu item to the boot.msg file. Finally, run syslinux -s <device>, and reboot your system to test out your newly expanded Billix.

I have a 2GB USB drive that has a "Super-Billix" installation that includes Knoppix and Ubuntu 8.04. An added bonus of having the entire Ubuntu live CD in your pocket is that, thanks to the speed of USB 2.0, you can install Ubuntu in less than ten minutes, which would be really useful at an installfest. There is good information on creating Ubuntu-bootable USB drives available at the Pendrive Linux Web site.

Alternatively, a really neat thing to do (but way beyond the scope of this article) is to convert Billix into a network-boot (via Pre-Execution Environment, or PXE) environment. I've actually got a VMware virtual machine running Billix as a PXE boot server.

Resources

Billix Project Page: sourceforge.net/projects/billix

Damn Small Linux: www.damnsmalllinux.org

DBAN Project Page: dban.sourceforge.net

Knoppix: www.knoppix.net

Pendrive Linux: www.pendrivelinux.com

Syslinux: syslinux.zytor.com/index.php

Pxelinux: syslinux.zytor.com/pxe.php

U3 Removal Software: www.u3.com/uninstall


If a tree falls in the woods and no one is there to hear it, does it make a sound? This is the classic query designed to place your mind into the Zen-like state known as the silent mind. Whether or not you want to hear a tree fall, if you run a network, you probably want to hear a server when it goes down. Many organizations utilize the long-established Simple Network Management Protocol (SNMP) as a way to monitor their networks proactively and listen for things going down.

At a rudimentary level, SNMP requires only two items to work: a management server and a managed device (or devices). The management server pulls status and health information at regular intervals from the managed devices and stores the information in a table. Managed devices use local SNMP agents to notify the management server when defined behavior occurs (such as errors or "traps"), which are stored in the same table on the server. The result is an accurate, real-time reporting mechanism for outages. However, SNMP as a protocol does not stipulate how the data in these tables is to be presented and managed for the end user. That's where a promising new open-source network-monitoring software called Zenoss (pronounced Zeen-ohss) comes in.

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment. The Core version of Zenoss used in this article is freely available under the GPLv2. An Enterprise version also is available with additional features and support. In this article, we install Zenoss on a CentOS 5.1 system to observe its usefulness in a network-monitoring role. From there, we create a simulated multisystem server network using the following systems: a Fedora-based Postfix e-mail server, an Ubuntu server running Apache and a Windows server running File and Print services. To conserve space, only the CentOS installation is discussed in detail here. For the managed systems, only SNMP installation and configuration are covered.

Building the Zenoss Server
Begin by selecting your hardware. Zenoss lacks specific hardware requirements, but it relies heavily on MySQL, so you can use MySQL requirements as a rough guideline. I recommend using the fastest processor available, 1GB of memory, fast enough hard disks to provide acceptable MySQL performance and Gigabit Ethernet for the network. I ran several test configurations, and this configuration seemed adequate enough for a medium-size network (100+ nodes/devices). To keep configuration simple, all firewalls and SELinux instances were disabled in the test environment. If you use firewalls in your environment, open ports 161 (SNMP), 8080 (Zenoss Management Page) and 514 (if you integrate syslog with Zenoss).

Install CentOS 5.1 on the server using your own preferences. I used a bare install with no X Window System or desktop manager. Assign a static IP address and any other pertinent network information (DNS servers and so forth). After the OS install is complete, install the following packages using the yum command below:

yum install mysql mysql-server net-snmp net-snmp-utils gmp httpd

If the mysqld or the httpd service has not started after yum installs it, start it and set it to run for your configured runlevel. Next, download the latest Zenoss Core rpm from Sourceforge.net (2.1.3 at the time of this writing), and install it using rpm from the command line. To start all the Zenoss-related daemons after the rpm has been installed, type the following at a command prompt:

service zenoss start
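The earlier step of starting mysqld and httpd and setting them to run for your runlevel can be sketched with CentOS 5's service/chkconfig tools. This is a sketch under the assumption you are on the CentOS box as root; it is wrapped in a function so nothing runs until called:

```shell
# Sketch: start the Zenoss prerequisites now and enable them at boot
# (SysV-style tools, as used on CentOS 5).
enable_zenoss_deps() {
    for svc in mysqld httpd; do
        service "$svc" start      # start the service now, if not running
        chkconfig "$svc" on       # enable it for the default runlevels
    done
}
```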

Launch a Web browser from any machine, and type the IP address of the Zenoss server using port 8080 (for example, http://192.168.142.6:8080). Log in to the site using the default account admin with a password of zenoss. This brings up the main dashboard. The dashboard is a compartmentalized view of the state of your managed devices. If you don't like the default display, you can arrange your dashboard any way you want using the various drop-down lists on the portlets (windows). I recommend setting the Production States portlet to display Production, so we can see our test systems after they are added.

Almost everything related to managed devices in Zenoss revolves around classes. With classes, you can create an infinite number of systems, processes or service classifications to monitor. To begin adding devices, we need to set our SNMP community strings at the top-level Devices class. SNMP community strings are like passphrases used to authenticate traffic between devices. If one device wants to communicate with another, they must have matching community names/strings. In many deployments, administrators use the default community name of public (and/or private), which creates a security risk. I recommend changing these strings and making them into a short phrase. You can add numbers and characters to make the community name more complex to guess/crack, but I find phrases easier to remember.

Click on the Devices link on the navigation menu on the left, so that Devices is listed near the top of the page. Click on the zProperties tab and scroll down. Enter an SNMP

Zenoss and the Art of Network Monitoring
If a server goes down, do you want to hear it? JERAMIAH BOWLING



community string in the zSNMPCommunity field. For our test environment, I used the string whatsourclearanceclarence. You can use different strings with different subclasses of systems or individual systems, but by setting it at the Devices class, it will be used for any subclasses unless it is overridden. You also could list multiple strings in the zSNMPCommunities under the Devices class, which allows you to define multiple strings for the discovery process discussed later. Make sure your community string (zSNMPCommunity) is in this list.

Installing Net-SNMP on Linux Clients
Now, let's set up our Linux systems so they can talk to the Zenoss server. After installing and configuring the operating systems on our other Linux servers, install the Net-SNMP package on each, using the following command on the Ubuntu server:

sudo apt-get install snmpd

And on the Fedora server, use:

yum install net-snmp

Once the Net-SNMP packages are installed, edit out any other lines in the Access Control sections at the beginning of /etc/snmp/snmpd.conf, and add the following lines:

#        sec.name   source            community
com2sec  local      localhost         whatsourclearanceclarence
com2sec  mynetwork  192.168.142.0/24  whatsourclearanceclarence

#      group.name  sec.model  sec.name
group  MyROGroup   v1         local
group  MyROGroup   v1         mynetwork
group  MyROGroup   v2c        local
group  MyROGroup   v2c        mynetwork

#                incl/excl  subtree  mask
view   all       included   .1       80

#       group      context  sec.model  sec.level  prefix  read  write  notif
access  MyROGroup  ""       any        noauth     exact   all   none   none

Do not edit out any lines beneath the last Access Control section. Please note that the above is only a mildly restrictive configuration. Consult the snmpd.conf file or the Net-SNMP documentation if you want to tighten access. On the Ubuntu server, you also may have to change the following line in the /etc/default/snmpd file to allow SNMP to bind to anything other than the local loopback address:

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'
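Before moving on, it can be worth confirming that the agent answers with the community string configured above. This check is not from the article; it is a sketch using the standard snmpwalk client from the net-snmp-utils package, wrapped in a function so it only runs when you call it (from the Zenoss server, or against localhost on the client itself):

```shell
# Optional sanity check: walk the system subtree with the article's
# community string. A stream of system.* OIDs means snmpd is reachable.
check_snmp() {
    host="${1:-localhost}"
    snmpwalk -v 2c -c whatsourclearanceclarence "$host" system
}
# e.g.: check_snmp localhost
#       check_snmp <client IP on the 192.168.142.0/24 network>
```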

Installing SNMP on Windows
On the Windows server, access the Add/Remove Programs utility from the Control Panel. Click on the Add/Remove Windows Components button on the left. Scroll down the list of Components, check off Management and Monitoring Tools, and click on the Details button. Check Simple Network Management Protocol in the list, and click OK to install. Close the Add/Remove window, and go into the Services console from Administrative Tools in the Control Panel. Find the SNMP service in the list, right-click on it, and click on Properties to bring up the service properties tabs. Click on the Traps tab, and type in the community name. In the list of Trap Destinations, add the IP address of the Zenoss server. Now, click on the Security tab, and check off the Send authentication trap box, enter the community name, and give it READ-ONLY rights. Click OK, and restart the service.

Return to the Zenoss management Web page. Click the Devices link to go into the subclass of /Devices/Servers/Windows, and on the zProperties tab, enter the name of a domain admin account and password in the zWinUser and zWinPassword fields. This account gives Zenoss access to the Windows Management Instrumentation (WMI) on your Windows systems. Make sure to click Save at the bottom of the page before navigating away.

Adding Devices into Zenoss
Now that our systems have SNMP, we can add them into Zenoss. Devices can be added individually or by scanning the network. Let's do both. To add our Ubuntu server into Zenoss, click on the Add Device link under the Management navigation section. Enter the IP address of the server and the community name. Under Device Class Path, set the selection to /Server/Linux. You could add a variety of other hardware, software and Zenoss information on this page before adding a system, but at a minimum, an IP address, name and community name is required (Figure 1). Click the Add Device button, and the discovery process runs. When the results are displayed, click on the link to the new device to access it.

Figure 1. Adding a Device into Zenoss



To scan the network for devices, click the Networks link under the Browse By section of the navigation menu. If your network is not in the list, add it using CIDR notation. Once added, check the box next to your network, and use the drop-down arrow to click on the Select Discover Devices option. You will see a similar results page as the one from before. When complete, click on the links at the bottom of the results page to access the new devices. Any device found will be placed in the /Discovered class. Because we should have discovered the Fedora server and the Windows server, they should be moved to the /Devices/Servers/Linux and /Devices/Servers/Windows classes, respectively. This can be done from each server's Status tab by using the main drop-down list and selecting Manager→Change Class.

If all has gone well so far, we have a functional SNMP monitoring system that is able to monitor heartbeat/availability (Figure 2) and performance information (Figure 3) on our systems. You can customize other various Status and Performance Monitors to meet your needs, but here we will use the default localhost monitors.

Figure 2. The Zenoss Dashboard

Figure 3. Performance data is collected almost immediately after discovery.

Creating Users and Setting E-Mail Alerts
At this point, we can use the dashboard to monitor the managed devices, but we will be notified only if we visit the site. It would be much more helpful if we could receive alerts via e-mail. To set up e-mail alerting, we need to create a separate user account, as alerts do not work under the admin account. Click on the Setting link under the Management navigation section. Using the drop-down arrow on the menu, select Add User. Enter a user name and e-mail address when prompted. Click on the new user in the list to edit its properties. Enter a password for the new account, and assign a role of Manager. Click Save at the bottom of the page. Log out of Zenoss, and log back in with the new account. Bring the settings page back up, and enter your SMTP server information. After setting up SMTP, we need to create an Alerting Rule for our new user. Click on the Users tab, and click on the account just created in the list. From the resulting page, click on the Edit tab, and enter the e-mail address to which you want alerts sent. Now, go to the Alerting Rules tab, and create a new rule using the drop-down arrow. On the edit tab of the new Alerting Rule, change the Action to email, Enabled to True, and change the Severity formula to >= Warning (Figure 4). Click Save.

Figure 4. Creating an Alert Rule

Figure 5. Zenoss alerts are sent fresh to your mailbox.



The above rule sends alerts when any Production server experiences an event rated Warning or higher (Figure 5). Using a filter, you can create any number of rules and have them apply only to specific devices or groups of devices. If you want to limit your alerts by time, to working hours for example, use the Schedule tab on the Alerting Rule to define a window. If no schedule is specified (the default), the rule runs all the time. In our rule, only one user will be notified. You also can create groups of users from the Settings page, so that multiple people are alerted, or you could use a group e-mail address in your user properties.

Services and Processes
We can expand our view of the test systems by adding a process and a service for Zenoss to monitor. When we refer to a process in Zenoss, we mean an active program, usually a daemon, running on a managed device. Zenoss uses regular expressions to monitor processes.

To monitor Postfix on the mail server, first, let's define it as a process. Navigate to the Processes page under the Classes section of the navigation menu. Use the drop-down arrow next to OS Processes, and click Add Process. Enter Postfix as the process ID. When you return to the previous page, click on the link to the new process. On the edit tab of the process, enter master in the Regex field. Click Save before navigating away. Go to the zProperties tab of the process, and make sure the zMonitor field is set to True. Click Save again. Navigate back to the mail server from the dashboard, and on the OS tab, use the topmost menu's drop-down arrow to select Add→Add OSProcess. After the process has been added, we will be alerted if the Postfix process degrades or fails. While still on the OS tab of the

Figure 6. Zenoss comes with a plethora of predefined Windows services to monitor.


server, place a check mark next to the new Postfix process, and from the OS Processes drop-down menu, select Lock OSProcess. On the next set of options, select Lock from deletion. This protects the process from being overwritten if Zenoss remodels the server.

Services in Zenoss are defined by active network ports instead of running daemons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1-65535 and include common network apps/protocols, such as SMTP (Port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the main menu's drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class Field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server, and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).

Only the Beginning
There are a multitude of other features in Zenoss that space here prevents covering, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9) and Zenpacks that provide additional monitoring and performance-capturing capabilities for common applications.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind as long as you work with networks, with Zenoss, at least you will be able to sleep at night, knowing you will hear things when they go down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

Resources

Zenoss: www.zenoss.com

Zenoss SourceForge Downloads Page: sourceforge.net/project/showfiles.php?group_id=163126

NET-SNMP: net-snmp.sourceforge.net

CentOS: www.centos.org

CentOS 5 Mirrors: isoredirect.centos.org/centos/5/isos/i386

Figure 7. Monitoring HTTP as an IPService

Figure 8. Zenoss automatically maps your network for you.

Figure 9. Multiple sites can be monitored geographically with the Google Maps API.

22 | www.linuxjournal.com

In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market. Dell and HP/Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects such as the Etherboot Project and the Linux Terminal Server Project have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys world-wide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?

A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot eXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP; stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4); for example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.
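The DHCPDISCOVER that kicks off this sequence is an ordinary BOOTP-format request carrying a vendor class that identifies the sender as a PXE client. As a rough illustration (not part of the article's setup; the field layout follows RFC 2131, and the helper name is ours), here is a minimal sketch in Python:

```python
import struct

def build_dhcp_discover(mac: bytes, xid: int = 0x12345678) -> bytes:
    """Build a minimal DHCPDISCOVER payload (RFC 2131 layout) carrying
    the PXEClient vendor class that PXE boot servers look for."""
    assert len(mac) == 6
    # Fixed BOOTP header: op=1 (request), htype=1 (Ethernet), hlen=6,
    # hops=0, transaction id, secs=0, flags with the broadcast bit set.
    header = struct.pack("!BBBBIHH", 1, 1, 6, 0, xid, 0, 0x8000)
    header += b"\x00" * 16            # ciaddr, yiaddr, siaddr, giaddr
    header += mac + b"\x00" * 10      # chaddr, padded to 16 bytes
    header += b"\x00" * 192           # sname (64) + file (128), unused here
    header += b"\x63\x82\x53\x63"     # DHCP magic cookie
    options = (
        b"\x35\x01\x01"               # option 53: message type = DISCOVER
        + b"\x3c\x09PXEClient"        # option 60: vendor class identifier
        + b"\xff"                     # end-of-options marker
    )
    return header + options
```

A real PXE ROM also includes option 93 (client architecture) and others; the sketch keeps only the fields needed to show the "special flags" the article mentions.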

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a

Thin Clients Booting over a Wireless Bridge

How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON

SYSTEM ADMINISTRATION

Figure 1. LTSP Traffic Profile and Boot Sequence

thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.

How to Set Up a Thin-Client Network with a Wireless Bridge

One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1 serving PXE clients.

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap: 192.168.0.0/24

rpc.mountd: 192.168.0.0/24

rpc.statd: 192.168.0.0/24

in.tftpd: 192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart

sudo invoke-rc.d nfs-common restart

sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide to getting the bridge running so that the clients can boot over the bridge:

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed their Web interface (using Ethernet) as their own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all daemon programs manually.

Evaluating the Performance of a Thin-Client Network

The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) comes with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to a wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber;



Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

and, third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Conclusion

As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our tests show, as illustrated in Figure 5, heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple of ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.
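The bandwidth gap quoted above is easy to put in perspective with a back-of-the-envelope calculation (the 100MB of boot-time traffic per client and the 50% real-world 802.11g throughput are our illustrative assumptions, not measurements from this article):

```python
def transfer_time_seconds(size_mb: float, link_mbps: float,
                          efficiency: float = 1.0) -> float:
    """Seconds needed to move size_mb megabytes over a link_mbps link,
    scaled by the fraction of the nominal rate actually achieved."""
    return (size_mb * 8e6) / (link_mbps * 1e6 * efficiency)

# Hypothetical 100MB of boot-time traffic for one thin client:
wired = transfer_time_seconds(100, 100)        # 100Mbps Ethernet: 8 seconds
wifi_g = transfer_time_seconds(100, 54, 0.5)   # 802.11g at ~half its 54Mbps rate
```

Even before contention between simultaneously booting clients is considered, the wireless link takes several times longer per client, which is why staggering boots matters so much more on 802.11g.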

Acknowledgement

This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03/IN3/1396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now, you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in remote power and remote terminals or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup

You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP

Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), and locate the subnet section (or each host section if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;

filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus

Set up a PXE server, and then add menus to boot kickstart images, rescue disks and diagnostic tools, all from the network. KYLE RANKIN



Install TFTP

After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
# the trivial file transfer protocol. The tftp protocol
# is often used to boot diskless workstations, download
# configuration files to network-aware printers, and to
# start the installation process for some operating systems.
service tftp
{
    disable         = no
    socket_type     = dgram
    protocol        = udp
    wait            = yes
    user            = root
    server          = /usr/sbin/in.tftpd
    server_args     = -s /var/lib/tftpboot
    per_source      = 11
    cps             = 100 2
    flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.

Add Syslinux

Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.
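The staging steps above can be sketched as a short script. The demo TFTP root keeps the sketch runnable without root privileges, and the candidate pxelinux.0 locations are assumptions that vary by distribution; in production you would use /var/lib/tftpboot (or your TFTP directory):

```shell
#!/bin/sh
# Sketch of the "Add Syslinux" steps; adjust paths for your system.
TFTP_ROOT=${TFTP_ROOT:-./tftpboot-demo}

# Directory that pxelinux searches for its configuration files:
mkdir -p "$TFTP_ROOT/pxelinux.cfg"

# pxelinux.0 location varies by distribution; common candidates:
for src in /usr/lib/syslinux/pxelinux.0 /usr/share/syslinux/pxelinux.0; do
    if [ -f "$src" ]; then
        cp "$src" "$TFTP_ROOT/"
        break
    fi
done

ls -R "$TFTP_ROOT"
```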

PXE Menu Magic

You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS, with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
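The lookup order described above can be sketched as a small helper (illustrative only; newer syslinux releases also try a UUID-based filename before the MAC file):

```python
def pxelinux_search_order(mac: str, ip: str) -> list:
    """List the filenames pxelinux tries in pxelinux.cfg/: the lowercase
    01-prefixed MAC file, then the uppercase hex IP with one trailing
    digit dropped per miss, and finally 'default'."""
    names = ["01-" + mac.replace(":", "-").lower()]
    hex_ip = "%02X%02X%02X%02X" % tuple(int(o) for o in ip.split("."))
    for length in range(8, 0, -1):   # 0A000001, 0A00000, ..., 0
        names.append(hex_ip[:length])
    names.append("default")
    return names
```

For a host at 10.0.0.1 with MAC 88:99:AA:BB:CC:DD, the first candidate is 01-88-99-aa-bb-cc-dd, then 0A000001 down to 0, then default, which is why a single default file catches every host.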

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux

label linux
        kernel vmlinuz-centos-3.6
        append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?

The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what



happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools, like Knoppix and memtest86+, available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus

Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more-specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
        kernel vmlinuz-knx5.1.1
        append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

label 2
        kernel vmlinuz-centos-4.5-64
        append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option lists a message, if there are any to display by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------

<F1> Main | <F2> Help

Default image will boot in 30 seconds...
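Files like f1.msg are plain text, so they are easy to generate when menus change often. A small, purely illustrative helper (the function name and box layout are ours) that renders a box like the one above:

```python
def boot_menu(options):
    """Render an f1.msg-style plain-text menu box for the given options."""
    rows = ["%d. %s" % (i, opt) for i, opt in enumerate(options, 1)]
    inner = max(len(r) for r in rows) + 2          # interior width of the box
    title = "----| Boot Options |"
    top = title + "-" * max(inner + 2 - len(title), 1)
    body = ["|" + " " * inner + "|"]
    body += ["| " + r.ljust(inner - 2) + " |" for r in rows]
    body.append("|" + " " * inner + "|")
    return "\n".join([top] + body + ["-" * (inner + 2)])

print(boot_menu(["Knoppix 5.1.1", "CentOS 4.5 64 bit"]))
```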

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk

One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk, I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the ones I used above in my example:

label 1
        kernel vmlinuz-knx5.1.1
        append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx5.1.1 and miniroot-knx5.1.1.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove 2




from the append line.

The final step might be the largest for you if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.

Extra Features: Memtest86+

Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available) or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
        kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

ConclusionThere are a number of extra featuresbeyond the ones I give here Forinstance a number of DOS boot flop-py images such as Peter Nordahlrsquos NTPassword and Registry Editor BootDisk can be added to a PXE environ-ment My own use of the pxelinuxmenu helps me streamline server kick-starts and makes it simple to kickstartmany servers all at the same time At boot time I can not only indicate

which OS to load but also more specific options such as the type ofserver (Web database and so forth)to install what hostname to use andother very specific tweaks Besidesthe benefit of no longer trackingdown MAC addresses you also cancreate a nice colorful user-friendlyboot menu that can be documentedso itrsquos simpler for new administratorsto pick up Finally Irsquove been able

to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.

www.linuxjournal.com | 29

New LinuxJournal.com Mobile

We are all very excited to let you know that LinuxJournal.com is optimized for mobile viewing. You can enjoy all of our news, blogs and articles from anywhere you can find a data connection on your phone or mobile device. We know you find it difficult to be separated from your Linux Journal, so now you can take LinuxJournal.com everywhere. Need to read that latest shell script trick right now? You got it. Go to m.linuxjournal.com to enjoy this new experience, and be sure to let us know how it works for you.

- Katherine Druckman

Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan

IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825-1829). Later, in 1998, these RFCs were deprecated by RFCs 2401-2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security

Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network

Creating VPNs with IPsec and SSL/TLS
How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN

SYSTEM ADMINISTRATION

Figure 1. A Basic VPN Tunnel

Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. SWAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a setmark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.
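As an illustrative sketch (not from the article itself), such a mark-based rule set could mark ESP traffic in the mangle table and then match on the mark after decryption:

```shell
# Mark ESP (IP protocol 50) packets as they arrive:
iptables -t mangle -A PREROUTING -p esp -j MARK --set-mark 1

# Because decryption preserves the mark, later rules can match
# the decrypted traffic by it:
iptables -A FORWARD -m mark --mark 1 -j ACCEPT
```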

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN

OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first

TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan

First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
	#
	# Simply use raw RSA keys.
	# After starting openswan, run:
	#   ipsec showhostkey --left (or --right)
	# and fill in the connection similarly
	# to the example below.
	left=192.168.0.89
	leftrsasigkey=0sAQPP
	# The remote user.
	right=192.168.0.92
	rightrsasigkey=0sAQON
	type=tunnel
	auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsakey

Then, copy and paste the contents of rsakey into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.
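Putting those pieces together, a Roadwarrior connection might be sketched like this (keys elided as above; auto=add loads the connection on the gateway and waits for the roaming client to initiate):

```
conn roadwarrior
	left=192.168.0.89
	leftrsasigkey=0sAQPP
	# Roaming client with an unknown address:
	right=%any
	rightrsasigkey=0sAQON
	type=tunnel
	auto=add
```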

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. The output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly
Version check and ipsec on-path                              [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                         [OK]
NETKEY detected, testing for disabled ICMP send_redirects    [OK]
NETKEY detected, testing for disabled ICMP accept_redirects  [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets)  [OK]
Checking that pluto is running                               [OK]
Checking for 'ip' command                                    [OK]
Checking for 'iptables' command                              [OK]
Opportunistic Encryption Support                             [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets; this is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.



Example: Setting Up a VPN Tunnel with OpenVPN

First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side in a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.
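For example, scp over SSH is one such secured channel (the hostname and destination path here are placeholders):

```shell
# On the server: generate the key, then push it to the client over SSH:
openvpn --genkey --secret static.key
scp static.key admin@client.example.com:/etc/openvpn/static.key

# Keep the key readable only by root on both ends:
chmod 600 static.key
```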

This type of shared key is the simplest key; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.
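With iptables, that usually amounts to accepting the tunnel port and the virtual interface; a minimal sketch, assuming the default UDP port 1194 and a TUN setup:

```shell
# Accept OpenVPN traffic on the default port:
iptables -A INPUT -p udp --dport 1194 -j ACCEPT

# Do not firewall the VPN virtual interface(s):
iptables -A INPUT -i tun+ -j ACCEPT
iptables -A FORWARD -i tun+ -j ACCEPT
```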

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar 3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official
port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000
as the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).

IPsec and OpenVPN: a Short Comparison

IPsec is considered the standard for VPNs; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement

Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era

Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

- Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

- Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

- Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

- Stored programs offered clear security advantages because, in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security

Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.
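A minimal sketch of how this looks in MySQL follows (the table, procedure and account names are invented for illustration); the procedure executes with the definer's rights, while the application account is granted only EXECUTE:

```sql
DELIMITER //

-- Executes with the privileges of its creator (the default behavior):
CREATE PROCEDURE add_audited_entry (IN p_note VARCHAR(255))
    SQL SECURITY DEFINER
BEGIN
    INSERT INTO audit_log (note, created_at) VALUES (p_note, NOW());
END //

DELIMITER ;

-- The application account may call the procedure, but it holds no
-- direct privileges on the audit_log table itself:
GRANT EXECUTE ON PROCEDURE mydb.add_audited_entry TO 'app_user'@'%';
```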

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application.

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

Data Abstraction

It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic

Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
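A single-round-trip version of that transfer might be sketched as follows (the schema and names are hypothetical, and a production version would also handle withdrawal limits, logging and error reporting):

```sql
DELIMITER //

CREATE PROCEDURE transfer_funds
    (IN p_from INT, IN p_to INT, IN p_amount DECIMAL(10,2))
BEGIN
    DECLARE v_balance DECIMAL(10,2);

    START TRANSACTION;

    -- Lock the source row and read its balance:
    SELECT balance INTO v_balance
      FROM accounts
     WHERE account_id = p_from
       FOR UPDATE;

    IF v_balance >= p_amount THEN
        UPDATE accounts SET balance = balance - p_amount
         WHERE account_id = p_from;
        UPDATE accounts SET balance = balance + p_amount
         WHERE account_id = p_to;
        COMMIT;
    ELSE
        ROLLBACK;
    END IF;
END //

DELIMITER ;
```

The application then issues a single CALL transfer_funds(...) rather than six separate statements, each with its own round trip.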

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure

to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications

Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in


Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed

It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while

the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice



Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well, even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation

Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping

It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?

Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes

you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements, such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes, and will take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
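As a quick illustration (the function and table names here are invented for the sketch, not taken from the article), a calculation wrapped in a stored function can then be used from ordinary SQL:

```sql
-- Hypothetical example: centralize a tax calculation in a stored function.
CREATE FUNCTION sales_tax(amount DECIMAL(10,2))
    RETURNS DECIMAL(10,2)
    DETERMINISTIC
    RETURN amount * 0.06;

-- The function can now appear anywhere ordinary SQL is accepted.
SELECT order_id, sales_tax(order_total) FROM orders;
```

Changing the tax rate later means editing one function rather than every query that computes it.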

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006, with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.

www.linuxjournal.com | 37


In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.
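Since a long key is recommended, one convenient way to generate one (this particular command is my suggestion, not from the article) is to hash a block of random bytes and paste the result into authkeys:

```shell
# Generate a long random key suitable for /etc/ha.d/authkeys
# (sha1sum output is 40 hex characters, safe to paste into the file).
key=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | awk '{print $1}')

# Print the two lines that make up a complete authkeys file.
printf 'auth 1\n1 sha1 %s\n' "$key"
```

Redirect the output into /etc/ha.d/authkeys on each node, then set the ownership and permissions as described above.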

The ha.cf File
The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around! Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive, the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

SYSTEM ADMINISTRATION

Listing 1. The /etc/ha.d/authkeys File

auth 1

1 sha1 ThisIsAVeryLongAndBoringPassword


increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.
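For reference, an mcast line takes a device, a multicast group address, a UDP port, a TTL and a loop flag. The values below are illustrative only (they are not from the article); check the documentation for the exact semantics in your version:

```
mcast eth0 225.0.0.1 694 1 0
```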

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state

www l inux journa l com | 3 9

Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs the serial device is /dev/ttyS1

# On stratton the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos' heartbeat ip address is 192.168.1.11

# phobos' heartbeat ip address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File
The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.
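For example, on Red Hat-style systems, a rough equivalent (my assumption based on standard chkconfig usage; the article does not specify) would be:

```
/sbin/chkconfig --del <service_name>
```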

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right, the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).

4 0 | www l inux journa l com



Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.
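To sketch what Heartbeat expects from such a script (the service name and echo statements below are placeholders of mine, not from the article), the essential shape is a start/stop case statement:

```shell
#!/bin/sh
# Minimal sketch of an LSB-style resource handler; "myservice" is a
# placeholder. Heartbeat invokes the script with "start" on takeover
# and "stop" when handing resources back.
myservice_handler() {
    case "$1" in
        start) echo "Starting myservice" ;;   # launch the daemon here
        stop)  echo "Stopping myservice" ;;   # shut the daemon down here
        *)     echo "Usage: myservice {start|stop}"; return 1 ;;
    esac
}

myservice_handler start
```

A real script would live in /etc/ha.d/resource.d or /etc/init.d and exec the actual daemon instead of echoing.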

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting
Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion
I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.

wwwl inux journa l com | 4 1

Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation
To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more space. I recommend at least a 1GHz processor or above with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux. HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (default is random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING


numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (./startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script, or directly edit /etc/rc.d/rc.local, to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management
Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for


Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication
Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora + FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts, or use DNS. I recommend having DNS and NTP in place already to make life easier.

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On




Figure 5. User Screen

Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on the second server, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients
With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
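Under the hood, the utility's changes amount to adding ldap as a lookup source in nsswitch.conf, along the lines of the following sketch (the exact lines your version writes may differ):

```
passwd: files ldap
shadow: files ldap
group:  files ldap
```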


Figure 7. Enter a name and password.

Figure 8. Enter information for the RManager account.

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some .dll issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and makes it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

46 | www.linuxjournal.com

SYSTEM ADMINISTRATION

Setting up replication in FDS is a relatively painless process.

Figure 11 Login Prompt

Figure 10 Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


At Your Service

PRINT SUBSCRIPTIONS: Renewing your subscription, changing your address, paying your invoice, viewing your account details or other subscription inquiries can instantly be done on-line: www.linuxjournal.com/subs. Alternatively, within the US and Canada, you may call us toll-free: 1-888-66-LINUX (54689), or internationally: +1-818-487-2089. E-mail us at subs@linuxjournal.com or reach us via postal mail: Linux Journal, PO Box 16476, North Hollywood, CA 91615-9911 USA. Please remember to include your complete name and address when contacting us.

DIGITAL SUBSCRIPTIONS: Digital subscriptions of Linux Journal are now available and delivered as PDFs anywhere in the world for one low cost. Visit www.linuxjournal.com/digital for more information, or use the contact information above for any digital magazine customer service inquiries.

LETTERS TO THE EDITOR: We welcome your letters and encourage you to submit them at www.linuxjournal.com/contact or mail them to Linux Journal, 1752 NW Market Street, #200, Seattle, WA 98107 USA. Letters may be edited for space and clarity.

WRITING FOR US: We always are looking for contributed articles, tutorials and real-world stories for the magazine. An author's guide, a list of topics and due dates can be found on-line: www.linuxjournal.com/author.

ADVERTISING: Linux Journal is a great resource for readers and advertisers alike. Request a media kit, view our current editorial calendar and advertising due dates, or learn more about other advertising and marketing opportunities by visiting us on-line: www.linuxjournal.com/advertising. Contact us directly for further information: ads@linuxjournal.com or +1 713-344-1956 ext. 2.

ON-LINE

WEB SITE: Read exclusive on-line-only content on Linux Journal's Web site: www.linuxjournal.com. Also, select articles from the print magazine are available on-line. Magazine subscribers, digital or print, receive full access to issue archives; please contact Customer Service for further information: subs@linuxjournal.com.

FREE e-NEWSLETTERS: Each week, Linux Journal editors will tell you what's hot in the world of Linux. Receive late-breaking news, technical tips and tricks, and links to in-depth stories featured on www.linuxjournal.com. Subscribe for free today: www.linuxjournal.com/enewsletters.

Executive Editor: Jill Franklin, jill@linuxjournal.com
Associate Editor: Shawn Powers, shawn@linuxjournal.com
Associate Editor: Mitch Frazier, mitch@linuxjournal.com
Senior Editor: Doc Searls, doc@linuxjournal.com
Art Director: Garrick Antikajian, garrick@linuxjournal.com
Products Editor: James Gray, newproducts@linuxjournal.com
Editor Emeritus: Don Marti, dmarti@linuxjournal.com
Technical Editor: Michael Baxter, mab@cruzio.com
Senior Columnist: Reuven Lerner, reuven@lerner.co.il
Chef Français: Marcel Gagné, mggagne@salmar.com
Security Editor: Mick Bauer, mick@visi.com
Proofreader: Geri Gale

Publisher: Carlie Fairchild, publisher@linuxjournal.com
General Manager: Rebecca Cassity, rebecca@linuxjournal.com
Sales Manager: Joseph Krack, joseph@linuxjournal.com
Sales and Marketing Coordinator: Tracy Manford, tracy@linuxjournal.com
Circulation Director: Mark Irgang, mark@linuxjournal.com
Webmistress: Katherine Druckman, webmistress@linuxjournal.com
Accountant: Candy Beauchamp, acct@linuxjournal.com

Contributing Editors
David A. Bandel • Ibrahim Haddad • Robert Love • Zack Brown • Dave Phillips • Marco Fioretti
Ludovic Marcotte • Paul Barry • Paul McKenney • Dave Taylor • Dirk Elmendorf

Linux Journal is published by, and is a registered trade name of, Belltown Media, Inc., PO Box 980985, Houston, TX 77098 USA.

Reader Advisory Panel
Brad Abram Baillio • Nick Baronian • Hari Boukis • Caleb S. Cullen • Steve Case
Kalyana Krishna Chadalavada • Keir Davis • Adam M. Dutko • Michael Eager • Nick Faltys • Ken Firestone
Dennis Franklin Frey • Victor Gregorio • Kristian Erik Hermansen • Philip Jacob • Jay Kruizenga
David A. Lane • Steve Marquez • Dave McAllister • Craig Oda • Rob Orsini • Jeffrey D. Parent
Wayne D. Powel • Shawn Powers • Mike Roberts • Draciron Smith • Chris D. Stark • Patrick Swartz

Editorial Advisory Board
Daniel Frye, Director, IBM Linux Technology Center
Jon "maddog" Hall, President, Linux International
Lawrence Lessig, Professor of Law, Stanford University
Ransom Love, Director of Strategic Relationships, Family and Church History Department, Church of Jesus Christ of Latter-day Saints
Sam Ockman
Bruce Perens
Bdale Garbee, Linux CTO, HP
Danese Cooper, Open Source Diva, Intel Corporation

Advertising
E-MAIL: ads@linuxjournal.com
URL: www.linuxjournal.com/advertising
PHONE: +1 713-344-1956 ext. 2

Subscriptions
E-MAIL: subs@linuxjournal.com
URL: www.linuxjournal.com/subscribe
PHONE: +1 818-487-2089
FAX: +1 818-487-4550
TOLL-FREE: 1-888-66-LINUX
MAIL: PO Box 16476, North Hollywood, CA 91615-9911 USA
Please allow 4–6 weeks for processing address changes and orders.

PRINTED IN USA

LINUX is a registered trademark of Linus Torvalds


Special_Issue.tar.gz

SHAWN POWERS

Welcome to the Linux Journal family! The issue you currently hold in your hands, or more precisely, the issue you're reading on your screen, was created specifically for you. It's so frustrating to subscribe to a magazine and then wait a month or two before you ever get to read your first issue. You decided to subscribe, so we decided to give you some instant gratification. Because really, isn't that the best kind?

This entire issue is dedicated to system administration. As a Linux professional or an everyday penguin fan, you're most likely already a sysadmin of some sort. Sure, some of us are in charge of administering thousands of computers, and some of us just keep our single desktop running. With Linux, though, it doesn't matter how big or small your infrastructure might be. If you want to run a MySQL server on your laptop, that's not a problem. In fact, in this issue, we even talk about stored procedures on MySQL 5, so we'll show you how to get the most out of that laptop server.

If you do manage lots of machines, however, there are some tools that can make your job much easier. Cfengine, for instance, can help with configuration management for tens, hundreds or even thousands of remote computers. It sure beats running to every server and configuring them individually. But what if you mess up those configuration files and half your servers go off-line? Don't worry, we'll teach you about Heartbeat. It's one of those high-availability programs that helps you keep going even when some of your servers die. If a computer dies alone in your server room, does anyone hear it scream? Heartbeat does.

Do you have more than one server room? More than one location? Many of us do. Programs like Zenoss can help you monitor your whole organization from one central place. In the following pages, we'll show you all about it. In fact, using a VPN (which we talk about here) and remote desktop sessions (again, covered here), you probably can manage most of your network without even leaving home. If you haven't reconfigured a server from a coffee shop across the country, you haven't lived.

I do understand, however, that lots of sysadmins enjoy going in to work. That's great; we like office coffee too. Keep reading to find many hands-on tools that will make your work day more efficient, more productive and even more fun. Billix, for example, is a USB-booting Linux distribution that has tons of great features ready to use. It can install operating systems, rescue broken computers, and if you want to do something it doesn't do by default, that's okay, the article explains how to include any custom tools you might want.

Whether it's booting thin clients over a wireless bridge or creating custom PXE boot menus, the life of a system administrator is never boring. At Linux Journal, we try to make your job easier month after month, with product reviews, tech tips, games (you know, for your lunch break) and in-depth articles like you'll read here. Whether you're a command-line junkie or a GUI guru, Linux has the tools available to make your life easier.

We hope you enjoy this bonus issue of Linux Journal. If you read this whole thing and are still anxious to get your hands on more Linux and open-source-related information, but your first paper issue hasn't arrived yet, don't forget about our Web site. Check us out at www.linuxjournal.com. If you manage to read the thousands and thousands of on-line articles and still don't have your first issue, I suggest checking with the post office. There must be something wrong with your mailbox.

Shawn Powers is the Associate Editor for Linux Journal. He's also the Gadget Guy for LinuxJournal.com, and he has an interesting collection of vintage Garfield coffee mugs. Don't let his silly hairdo fool you, he's a pretty ordinary guy and can be reached via e-mail at shawn@linuxjournal.com. Or, swing by the #linuxjournal IRC channel on Freenode.net.

System Administration: Instant Gratification Edition


Cfengine is known by many system administrators to be an excellent tool to automate manual tasks on UNIX and Linux-based machines. It also is the most comprehensive framework to execute administrative shell scripts across many servers running disparate operating systems. Although cfengine is certainly good for these purposes, it also is widely considered the best open-source tool available for configuration management. Using cfengine, sysadmins with a large installation of, say, 800 machines can have information about their environment quickly that otherwise would take months to gather, as well as the ability to change the environment in an instant. For an initial example, if you have a set of Linux machines that need to have a different /etc/nsswitch.conf, and then have some processes restarted, there's no need to connect to each machine and perform these steps, or even to write a script and run it on the machines once they are identified. You simply can tell cfengine that all the Linux machines running Fedora/Debian/CentOS with X GB of RAM or more need to use a particular /etc/nsswitch.conf until a newer one is designated. Cfengine can do all that in a one-line statement.

Cfengine's configuration management capabilities can work in several different ways. In this article, I focus on a make-it-so-and-keep-it-so approach. Let's consider a small hosting company configuration with three administrators and two data centers (Figure 1).

Each administrator can use a Subversion/CVS sandbox to hold repositories for each data center. The cfengine client will run on each client machine, either through a cron job or a cfengine execution dæmon, and pull the cfengine configuration files appropriate for each machine from the server. If there is work to be done for that particular machine, it will be carried out and reported to the server. If there are configuration files to copy, the ones active on the client host will be replaced by the copies on the cfengine server. (Cfengine will not replace a file if the copy process is partial or incomplete.)

A cfengine implementation has three major components:

1. Version control: this usually consists of a versioning system, such as CVS or Subversion.

2. Cfengine internal components: cfservd, cfagent, cfexecd, cfenvd, cfagent.conf and update.conf.

3. Cfengine commands: processes, files, shellcommands, groups, editfiles, copy and so forth.

The cfservd is the master dæmon, configured with /etc/cfservd.conf, and it listens on port 5803 for connections to the cfengine server. This dæmon controls security and directory access for all client machines connecting to it. cfagent is the client program for running cfengine on hosts. It will run either from cron, manually or from the execution dæmon for cfengine, cfexecd. A common method for running the cfagent is to execute it from cron using the cfexecd in non-dæmon mode. The primary reason for using both is to engage cfengine's logging system. This is accomplished using the following:

*/10 * * * * /var/cfengine/sbin/cfexecd -F

as a cron entry on Linux (unless Solaris starts to understand */10). Note that this is fairly frequent and good only for a low number of servers. We don't want 800 servers updating within

Cfengine for Enterprise Configuration Management

Cfengine makes it easier to manage configuration files across large numbers of machines.

SCOTT LACKEY


Figure 1 How the Few Control the Many

the same ten minutes.

The cfenvd is the "environment dæmon" that runs on the client side of the cfengine implementation. It gathers information about the host machine, such as hostname, OS and IP address. The cfenvd detects these factors about a host and uses them to determine to which groups the machine belongs. This, in effect, creates a profile for each machine that cfengine uses to determine what work to perform on each host.
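Returning to the scheduling concern above: besides staggering cron entries by hand, cfengine can spread the load itself. A minimal sketch of the relevant control setting follows (the value 5 is an arbitrary example; this splay behavior is also why the test run later in this article passes --no-splay to cfagent to get an immediate run):

```
control:

    # each host waits a hostname-derived interval of up to 5 minutes
    # before doing any work, so clients don't all hit the server at once
    SplayTime = ( 5 )
```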

The master configuration file for each host is cfagent.conf. This file can contain all the configuration information and cfengine code for the host, a subset of hosts or all hosts in the cfengine network. This file is often just a starting point where all configurations are stored in other files and "imported" into cfagent.conf, in a very similar fashion to Nagios configuration files. The update.conf file is the fundamental configuration file for the client. It primarily just identifies the cfengine server and gets a copy of the cfagent.conf.

Figure 2 Automated Distribution of Cfengine Files

The update.conf file tells the cfengine server to deploy a new cfagent.conf file (and perhaps other files as well) if the current copy on the host machine is different. This adds some protection for a scenario where a corrupt cfagent.conf is sent out, or in case there never was one. Although you could use cfengine to distribute update.conf, it should be copied manually to each host.
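As an illustration, a bare-bones cfengine-2-style update.conf might look like the following. The server name and paths here are hypothetical, and the exact layout your installation expects may differ, so treat this as a sketch rather than a drop-in file:

```
# update.conf -- minimal sketch; refreshes this client's inputs
control:

    actionsequence = ( copy )
    policyhost     = ( cfengine.example1.com )

copy:

    # pull the master input files down from the policy server
    /var/cfengine/masterfiles/inputs dest=/var/cfengine/inputs
        server=$(policyhost)
        recurse=inf
```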

Cfengine "commands" are not entered on the command line. They make up the syntax of the cfengine configuration language. Because cfengine is a framework, the system administrator must write the necessary commands in cfengine configuration files in order to move and manipulate data. As an example, let's take a look at the files command as it would appear in the cfagent.conf file:

files:

    /etc/passwd mode=644 owner=root action=fixall
    /etc/shadow mode=600 owner=root action=fixall

This would set all machines' /etc/passwd and /etc/shadow files to the permissions listed in the file (644 and 600). It also would change the owner of the file to root and fix all of these settings if they are found to be different, each time cfengine runs. It's important to keep in mind that there are no group limitations to this particular files command. If cfengine does not have a group listed for the command, it assumes you mean any host. This also could be written as:

files:

    any::

        /etc/passwd mode=644 owner=root action=fixall
        /etc/shadow mode=600 owner=root action=fixall
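To see what action=fixall amounts to, here is a rough shell equivalent, demonstrated on a temporary file rather than the real /etc/passwd (illustration only; cfengine performs this check-and-repair on every run, which is the point of the tool):

```shell
# simulate cfengine re-asserting mode=644 on a file that has drifted
tmp=$(mktemp)
chmod 777 "$tmp"                  # permissions have drifted
if [ "$(stat -c %a "$tmp")" != "644" ]; then
  chmod 644 "$tmp"                # fixall: repair only when different
fi
fixed=$(stat -c %a "$tmp")
echo "$fixed"                     # → 644
rm -f "$tmp"
```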

This brings us to an important topic in building a cfengine implementation: groups. There is a groups command that can be used to assign hosts to groups based on various criteria. Custom groups that are created in this way are called soft groups. The groups that are filled by the cfenvd dæmon automatically are referred to as hard groups. To use the groups feature of cfengine and assign some soft groups, simply create a groups.cf file, and tell the cfagent.conf to import it somewhere in the beginning of the file:

import:

    any::

        groups.cf

Cfengine will look in the default directory for the groups.cf file in /var/cfengine/inputs. Now you can create arbitrary groups based on any criteria. It is important to remember that the terms groups and classes are completely interchangeable in cfengine:

groups:

    development = ( nfs01 nfs02 10.0.0.17 )

    production = ( app01 app02 development )

You also can combine hard groups that have been discovered by cfenvd with soft groups:

groups:

    legacy = ( irix compiled_on_cygwin sco )

Let's get our testing setup in order. First, install cfengine on a server and a client or workstation. Cfengine has been compiled on almost everything, so there should be a package for your OS/distribution. Because the source is usually the latest version, and many versions are bug fixes, I recommend compiling it yourself. Installing cfengine gives you both the server and client binaries


Because cfengine is a framework, the system administrator must write the necessary commands in cfengine configuration files in order to move and manipulate data.

and utilities on every machine, so be careful not to run the server dæmon (cfservd) on a client machine unless you specifically intend to do that. After the install, you should have a /var/cfengine directory and the binaries mentioned previously.

Before any host can actually communicate with the cfengine server, keys must be exchanged between the two. Cfengine keys are similar to SSH keys, except they are one-way. That is to say, both the server and the client must have each other's public key in order to communicate. Years of sysadmin paranoia cause me to recommend manually copying all keys and trusting nothing. Copy /var/cfengine/ppkeys/localhost.pub from the server to all the clients and from the clients to the server in the same directory, renaming them /var/cfengine/ppkeys/root-10.11.0.1.pub, where the IP is 10.11.0.1.

On the server side, cfservd.conf must be configured to allow clients to access particular directories. To do this, create an AllowConnectionsFrom and an admit section:

# cfservd.conf

control:

    AllowConnectionsFrom = ( 192.168.0.0/24 )

admit:

    /configs/datacenter1 *.example1.com

    /configs/datacenter2 *.example2.com

To test your example client to see whether it is connecting to the cfengine server, make sure port 5803 is clear between them, and run the server with:

cfservd -v -d2

And, on the client, run:

cfagent -v --no-splay

This will give you a lot of debugging information on the server side to see what's working and what isn't.

Now, let's take a look at distributing a configuration file. Although cfengine has a full-featured file editor in the editfiles command, using this method for distributing configurations is not advised. The copy command will move a file from the server to the client machine with .cfnew appended to the filename. Then, once the file has been copied completely, it renames the file and saves the old copy as .cfsaved in the specified directory. Here's the copy command syntax:

copy:

    class::

        <<master-file>>

            dest=target-file
            server=server
            mode=mode
            owner=owner
            group=group
            backup=true/false
            repository=backup dir
            recurse=number/inf/0
            define=classlist

Only the dest= is required, along with the filename to save at the destination. These can be different. Here's another example:

copy:

    linux::

        $copydir/linux/resolv.conf

            dest=/etc/resolv.conf
            server=cfengine.example1.com
            mode=644
            owner=root
            group=root
            backup=true
            repository=/var/cfengine/cfbackup
            recurse=0
            define=copiedresolvdotconf

The last line in this copy statement assigns this host to a group called copiedresolvdotconf. Although we don't have to do anything after copying this particular file, we may want to do some action on all hosts that just had this file successfully sent to them, such as sending an e-mail or restarting a process. As another example, if you update a configuration file that is attached to a dæmon, you may want to send a SIGHUP to the process to cause it to reread the configuration file. This is common with Apache's httpd.conf or inetd.conf. If the copy is not successful, this server won't be added to the copiedresolvdotconf class. You can query all servers in the network to see whether they are members and, if not, find out what went wrong.
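For instance, a follow-up stanza keyed to that class could deliver the signal. This sketch uses cfengine 2's processes command with a hypothetical dæmon name; any process whose config file you just replaced would take its place:

```
processes:

    copiedresolvdotconf::

        # HUP the daemon so it rereads the file cfengine just copied
        "syslogd" signal=hup
```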

A great way to version control your config files is to use a cfengine variable for the filename being copied to control which version gets distributed. Such a line may look something like this:

copy:

    linux::

        $copydir/linux/$resolv_conf

Or, better yet, you can use cfengine's class-specific variables, whose scope is limited to the class with which they are associated. This makes copy statements much more elegant and can simplify changes as your cfengine files scale:

control:

    # $resolve_conf value depends on context


    # is this a linux machine or hpux?

    linux:: resolve_conf = ( $copydir/linux/resolv.conf )

    hpux:: resolve_conf = ( $copydir/hpux/resolv.conf )

copy:

    linux::

        $resolve_conf

Here is a full cfagent.conf file that makes use of everything I've covered thus far. It also adds some practical examples of how to do sysadmin work with cfengine:

# cfagent.conf

control:

    actionsequence = ( files editfiles processes )
    AddInstallable = ( cron_restart )

    solaris:: crontab = ( /var/spool/cron/crontabs/root )
    linux::   crontab = ( /var/spool/cron/root )

files:

    solaris::
        $crontab action=touch

    linux::
        $crontab action=touch

editfiles:

    solaris::
        { $crontab
          AppendIfNoSuchLine "0,10,20,30,40,50 * * * * /var/cfengine/sbin/cfexecd -F"
          DefineClasses "cron_restart"
        }

    linux::
        { $crontab
          AppendIfNoSuchLine "0,10,20,30,40,50 * * * * /var/cfengine/sbin/cfexecd -F"
        }
        # linux doesn't need a cron restart

shellcommands:

    solaris.cron_restart::
        "/etc/init.d/cron stop"
        "/etc/init.d/cron start"

import:

    any::
        groups.cf
        copy.cf

The above is a full cfagent configuration that adds cfengine execution from cron to each client (if it's Linux or Solaris). So, effectively, once you run cfengine manually for the first time with this cfagent.conf file, cfengine will continue to run every ten minutes from that host, but you won't need to edit or restart cron. The control section of the cfagent.conf is where you can define some variables that will control how cfengine handles the configuration file. actionsequence tells cfengine what order to execute each command, and AddInstallable is a variable that holds soft groups that get defined later in the file in a "define" statement, such as after the editfiles command, where the line is DefineClasses "cron_restart". The reason for using AddInstallable is that sometimes cfengine skips over groups that are defined after command execution, and defining that group in the control section ensures that the command will be recognized throughout the configuration.

Being able to check configuration files out from a versioning system and distribute them to a set of servers is a powerful system administration tool. A number of independent tools will do a subset of cfengine's work (such as rsync, ssh and make), but nothing else allows a small group of system administrators to manage such a large group of servers. Centralizing configuration management has the dual benefit of information and control, and cfengine provides these benefits in a free, open-source tool for your infrastructure and application environments.

Scott Lackey is an independent technology consultant who has developed and deployed configuration management solutions across industry, from NASA to Wall Street. Contact him at slackey@violetconsulting.net, www.violetconsulting.net.




There are many different ways to control a Linux system over a network, and many reasons you might want to do so. When covering remote control in past columns, I've tended to focus on server-oriented usage scenarios and tools, which is to say, on administering servers via text-based applications, such as OpenSSH. But what about GUI-based applications and remote desktops?

Remote desktop sessions can be very useful for technical support, surveillance and remote control of desktop applications. But it isn't always necessary to access an entire desktop; you may need to run only one or two specific graphical applications.

In this month's column, I describe how to use VNC (Virtual Network Computing) for remote desktop sessions and OpenSSH with X forwarding for remote access to specific applications. Our focus here, of course, is on using these tools securely, and I include a healthy dose of opinion as to the relative merits of each.

Remote Desktops vs. Remote Applications

So, which approach should you use, remote desktops or remote applications? If you've come to Linux from the Microsoft world, you may be tempted to assume that because Terminal Services in Windows is so useful, you have to have some sort of remote desktop access in Linux, too. But that may not be the case.

Linux and most other UNIX-based operating systems use the X Window System as the basis for their various graphical environments. And, the X Window System was designed to be run over networks. In fact, it treats your local system as a self-contained network, over which different parts of the X Window System communicate.

Accordingly, it's not only possible but easy to run individual X Window System applications over TCP/IP networks; that is, to display the output (window) of a remotely executed graphical application on your local system. Because the X Window System's use of networks isn't terribly secure (the X Window System has no native support whatsoever for any kind of encryption), nowadays we usually tunnel X Window System application windows over the Secure Shell (SSH), especially OpenSSH.

The advantage of tunneling individual application windows is that it's faster and generally more secure than running the entire desktop remotely. The disadvantages are that OpenSSH has a history of security vulnerabilities, and for many Linux newcomers, forwarding graphical applications via commands entered in a shell session is counterintuitive. And besides, as I mentioned earlier, remote desktop control (or even just viewing) can be very useful for technical support and for security surveillance.

Using OpenSSH with X Window System Forwarding

Having said all that, tunneling X Window System applications over OpenSSH may be a lot easier than you imagine. All you need is a client system running an X server (for example, a Linux desktop system or a Windows system running the Cygwin X server) and a destination system running the OpenSSH dæmon (sshd).

Note that I didn't say "a destination system running sshd and an X server". This is because X servers, oddly enough, don't actually run or control X Window System applications; they merely display their output. Therefore, if you're running an X Window System application on a remote system, you need to run an X server on your local system, not on the remote system. The application will execute on the remote system and send its output to your local X server's display.

Suppose you've got two systems, mylaptop and remotebox, and you want to monitor system resources on remotebox with the GNOME System Monitor. Suppose further, you're running the X Window System on mylaptop and sshd on remotebox.

First, from a terminal window or xterm on mylaptop, you'd open an SSH session like this:

mick@mylaptop:~$ ssh -X admin-slave@remotebox

admin-slave@remotebox's password:

Last login: Wed Jun 11 21:50:19 2008 from
dtcla00b674986d

admin-slave@remotebox:~$

Note the -X flag in my ssh command. This enables X Window System forwarding for the SSH session. In order for that to work, sshd on the remote system must be configured with X11Forwarding set to yes in its /etc/ssh/sshd_config file. On many distributions, yes is the default setting, but check yours to be sure.
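The relevant server-side line looks like this (and sshd must be reloaded after changing it):

```
# /etc/ssh/sshd_config (on remotebox)
X11Forwarding yes
```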

Next, to run the GNOME System Monitor on remotebox, such that its output (window) is displayed on mylaptop, simply execute it from within the same SSH session:

admin-slave@remotebox:~$ gnome-system-monitor &

The trailing ampersand (&) causes this command to run in

Secured Remote Desktop/Application Sessions

Run graphical applications from afar, securely.

MICK BAUER


the background, so you can initiate other commands from the same shell session. Without this, the cursor won't reappear in your shell window until you kill the command you just started.

At this point, the GNOME System Monitor window should appear on mylaptop's screen, displaying system performance information for remotebox. And, that really is all there is to it.

This technique works for practically any X Window System application installed on the remote system. The only catch is that you need to know the name of anything you want to run in this way; that is, the actual name of the executable file.

If you're accustomed to starting your X Window System applications from a menu on your desktop, you may not know the names of their corresponding executables. One quick way to find out is to open your desktop manager's menu editor, and then view the properties screen for the application in question.

For example, on a GNOME desktop, you would right-click on the Applications menu button, select Edit Menus, scroll down to System/Administration, right-click on System Monitor and select Properties. This pops up a window whose Command field shows the name gnome-system-monitor.

Figure 1 shows the Launcher Properties, not for the GNOME System Monitor, but instead for the GNOME File Browser, which is a better example, because its launcher entry includes some command-line options. Obviously, all such options also can be used when starting X applications over SSH.

Figure 1 Launcher Properties for the GNOME File Browser (Nautilus)

If this sounds like too much trouble, or if you're worried about accidentally messing up your desktop menus, you simply can run the application in question, issue the command ps auxw in a terminal window, and find the entry that corresponds to your application. The last field in each row of the output from ps is the executable's name plus the command-line options with which it was invoked.
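Because ps separates its columns with whitespace, you can pull the command portion out of a line programmatically. A small sketch, run here against a canned sample line in ps auxw format rather than live ps output (the process shown is made up):

```shell
# sample 'ps auxw' row: ten fixed columns, then the command and its options
line="mick  4242  0.0  0.1 123456 7890 ?  Sl  21:50  0:00 gnome-system-monitor --show-processes-tab"
# keep everything from field 11 onward (executable plus options)
cmd=$(printf '%s\n' "$line" | awk '{out=$11; for(i=12;i<=NF;i++) out=out" "$i; print out}')
echo "$cmd"   # → gnome-system-monitor --show-processes-tab
```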

Once you've finished running your remote X Window System application, you can close it the usual way (selecting Exit from its File menu, clicking the x button in the upper right-hand corner of its window and so forth). Don't forget to close your SSH session too, by issuing the command exit in the terminal window where you're running SSH.

Virtual Network Computing (VNC)

Now that I've shown you the preferred way to run remote X Window System applications, let's discuss how to control an entire remote desktop. In the Linux/UNIX world, the most popular tool for this is Virtual Network Computing, or VNC.

Originally a project of the Olivetti Research Laboratory (which was subsequently acquired by Oracle and then AT&T before being shut down), VNC uses a protocol called Remote Frame Buffer (RFB). The original creators of VNC now maintain the application suite RealVNC, which is available in free and commercial versions, but TightVNC, UltraVNC and GNOME's vino VNC server and vinagre VNC client also are popular.

VNC's strengths are its simplicity, ubiquity and portability: it runs on many different operating systems. Because it runs over a single TCP port (usually TCP 5900), it's also firewall-friendly and easy to tunnel.

Its security, however, is somewhat problematic. VNC authentication uses a DES-based transaction that, if eavesdropped on, is vulnerable to optimized brute-force (password-guessing) attacks. This vulnerability is exacerbated by the fact that many versions of VNC support only eight-character passwords.

Furthermore, VNC session data usually is transmitted unencrypted. Only a couple flavors of VNC support TLS encryption of RFB streams, and it isn't clear how securely TLS has been implemented even in those. Thus, an attacker using a trivially hacked VNC client may be able to reconstruct and view eavesdropped VNC streams.

www.linuxjournal.com | 11

But Don't Real Sysadmins Stick to Terminal Sessions?
If you've read my book or my past columns, you've endured my repeated exhortations to keep the X Window System off of Internet-facing servers, or any other system on which it isn't needed, due to X's complexity and history of security vulnerabilities. So why am I devoting an entire column to graphical remote system administration?

Don't worry, I haven't gone soft-hearted (though possibly slightly soft-headed). I stand by that advice. But there are plenty of contexts in which you may need to administer or view things remotely besides hardened servers in Internet-facing DMZ networks.

And not all people who need to run remote applications in those non-Internet-DMZ scenarios are advanced Linux users. Should they be forbidden from doing what they need to do until they've mastered using the vi editor and writing bash scripts? Especially given that it is possible to mitigate some of the risks associated with the X Window System and VNC?

Of course they shouldn't, although I do encourage all Linux newcomers to embrace the command line. The day may come when Linux is a truly graphically oriented operating system like Mac OS, but for now, pretty much the entire OS is driven by configuration files in /etc (and in users' home directories), and that's unlikely to change soon.

Luckily, as it operates over a single TCP port, VNC is easy to tunnel through SSH, through Virtual Private Network (VPN) sessions and through TLS/SSL wrappers, such as stunnel. Let's look at a simple VNC-over-SSH example.

Tunneling VNC over SSH
To tunnel VNC over SSH, your remote system must be running an SSH daemon (sshd) and a VNC server application. Your local system must have an SSH client (ssh) and a VNC client application.

Our example remote system, remotebox, already is running sshd. Suppose it also has vino, which is also known as the GNOME Remote Desktop Preferences applet (on Ubuntu systems, it's located in the System menu's Preferences section). First, presumably from remotebox's local console, you need to open that applet and enable remote desktop sharing. Figure 2 shows vino's General settings.

If you want to view only this system's remote desktop without controlling it, uncheck Allow other users to control your desktop. If you don't want to have to confirm remote connections explicitly (for example, because you want to connect to this system when it's unattended), you can uncheck the Ask you for confirmation box. Any time you leave yourself logged in to an unattended system, be sure to use a password-protected screensaver.

vino is limited in this way: because vino is loaded only after you log in, you can use it only to connect to a fully logged-in GNOME session in progress, not, for example, to a gdm (GNOME login) prompt. Unlike vino, other versions of VNC can be invoked as needed from xinetd or inetd. That technique is out of the scope of this article, but see Resources for a link to a how-to for doing so in Ubuntu, or simply Google the string "vnc xinetd".

Continuing with our vino example, don't close that Remote Desktop Preferences applet yet. Be sure to check the Require the user to enter this password box, and to select a difficult-to-guess password. Remember, vino runs in an already-logged-in desktop session, so unless you set a password here, you'll run the risk of allowing completely unauthenticated sessions (depending on whether a password-protected screensaver is running).

If your remote system will be run logged in but unattended, you probably will want to uncheck Ask you for confirmation. Again, don't forget that locked screensaver.

We're not done setting up vino on remotebox yet. Figure 3 shows the Advanced Settings tab.

Several advanced settings are particularly noteworthy. First, you should check Only allow local connections, after which remote connections still will be possible, but only via a port-forwarder like SSH or stunnel. Second, you may want to set a custom TCP port for vino to listen on, via the Use an alternative port option. In this example, I've chosen 20226. This is an instance of useful security-through-obscurity: if our other (more meaningful) security controls fail, at least we won't be running VNC on the obvious default port.

SYSTEM ADMINISTRATION

On the Web: Articles, Talk

Every couple weeks over at LinuxJournal.com, our Gadget Guy, Shawn Powers, posts a video. They are fun, silly, quirky and sometimes even useful. So whether he's reviewing a new product or showing how to use some Linux software, be sure to swing over to the Web site and check out the latest video: www.linuxjournal.com/video

We'll see you there, or more precisely, vice versa!

Figure 2 General Settings in GNOME Remote Desktop Preferences (vino)

Figure 3 Advanced Settings in GNOME Remote Desktop Preferences (vino)

Also, you should check the box next to Lock screen on disconnect, but you probably should not check Require encryption, as SSH will provide our session encryption, and adding redundant (and not-necessarily-known-good) encryption will only increase vino's already-significant latency. If you think there's only a moderate level of risk of eavesdropping in the environment in which you want to use vino (for example, on a small, private, local-area network inaccessible from the Internet), vino's TLS implementation may be good enough for you. In that case, you may opt to check the Require encryption box and skip the SSH tunneling.

(If any of you know more about TLS in vino than I was able to divine from the Internet, please write in. During my research for this article, I found no documentation or on-line discussions of vino's TLS design details whatsoever, beyond people commenting that the soundness of TLS in vino is unknown.)

This month, I seem to be offering you more "opt-out" opportunities in my examples than usual. Perhaps I'm becoming less dogmatic with age. Regardless, let's assume you've followed my advice to forego vino's encryption in favor of SSH.

At this point, you're done with the server-side settings. You won't have to change those again. If you restart your GNOME session on remotebox, vino will restart as well, with the options you just set. The following steps, however, are necessary each time you want to initiate a VNC/SSH session.

On mylaptop (the system from which you want to connect to remotebox), open a terminal window, and type this command:

mick@mylaptop:~$ ssh -L 20226:remotebox:20226 admin-slave@remotebox

OpenSSH's -L option sets up a local port-forwarder. In this example, connections to mylaptop's TCP port 20226 will be forwarded to the same port on remotebox. The syntax for this option is "-L [local port]:[remote IP or hostname]:[remote port]". You can use any available local TCP port you like (higher than 1024, unless you're running SSH as root), but the remote port must correspond to the alternative port you set vino to listen on (20226 in our example), or if you didn't set an alternative port, it should be VNC's default of 5900.
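To make that syntax concrete, here's a trivial illustrative helper (not part of OpenSSH; it just assembles the command string from this article's example values, so you can see how each part maps onto the -L argument):

```shell
# Build "ssh -L [local port]:[remote host]:[remote port] user@host".
vnc_tunnel() {
  lport=$1; rhost=$2; rport=$3; user=$4
  printf 'ssh -L %s:%s:%s %s@%s\n' "$lport" "$rhost" "$rport" "$user" "$rhost"
}
vnc_tunnel 20226 remotebox 20226 admin-slave
# prints: ssh -L 20226:remotebox:20226 admin-slave@remotebox
```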

That's it for the SSH session. You'll be prompted for a password and then given a bash prompt on remotebox. But you won't need it, except to enter exit when your VNC session is finished. It's time to fire up your VNC client.

vino's companion client, vinagre (also known as the GNOME Remote Desktop Viewer), is good enough for our purposes here. On Ubuntu systems, it's located in the Applications menu, in the Internet section. In Figure 4, I've opened the Remote Desktop Viewer and clicked the Connect button. As you can see, rather than remotebox, I've entered localhost as the hostname. I've also entered port number 20226.

When I click the Connect button, vinagre connects to mylaptop's local TCP port 20226, which actually is being listened on by my local SSH process. SSH forwards this connection attempt through its encrypted connection to TCP 20226 on remotebox, which is being listened on by remotebox's vino process.

At this point, I'm prompted for remotebox's vino password (shown in Figure 2). On successful authentication, I'll have full access to my active desktop session on remotebox.

To end the session, I close the Remote Desktop Viewer's session, and then enter exit in my SSH session to remotebox. That's all there is to it.

This technique is easy to adapt to other versions of VNC servers and clients, and probably also to other versions of SSH; there are commercial alternatives to OpenSSH, including Windows versions. I mentioned that VNC client applications are available for a wide variety of platforms; on any such platform, you can use this technique, so long as you also have an SSH client that supports port forwarding.

Conclusion
Thus ends my crash course on how to control graphical applications over networks. I hope my examples of both techniques, SSH with X forwarding and VNC over SSH, help you get started with whatever particular distribution and software packages you prefer. Be safe!

Mick Bauer (darthelmo@wiremonkeys.org) is Network Security Architect for one of the US's largest banks. He is the author of the O'Reilly book Linux Server Security, 2nd edition (formerly called Building Secure Servers With Linux), an occasional presenter at information security conferences and composer of the "Network Engineering Polka".


Resources

The Cygwin/X (information about Cygwin's free X server for Microsoft Windows): x.cygwin.com

Tichondrius' HOWTO for setting up VNC with resumable sessions (Ubuntu-centric, but mostly applicable to other distributions): ubuntuforums.org/showthread.php?t=122402

Wikipedia's VNC article, which may be helpful in making sense of the different flavors of VNC: en.wikipedia.org/wiki/Vnc

Figure 4 Using vinagre to Connect to an SSH-Forwarded Local Port


Does anyone remember Linuxcare? Founded in 1998, Linuxcare was a company that provided support services for Linux users in corporate environments. I remember seeing Linuxcare at the first-ever LinuxWorld conference in San Jose, and the thing I took away from that LinuxWorld was the Linuxcare Bootable Business Card (BBC). The BBC was a 50MB cut-down Linux distribution that fit on a business-card-size compact disc. I used that distribution to recover and repair quite a few machines, until the advent of Knoppix. I always loved the portability of that little CD though, and I missed it greatly until I stumbled across Damn Small Linux (DSL) one day.

After reading through the DSL Web site, I discovered that it was possible to run DSL off of a bootable USB key, and that old love for the Bootable Business Card was rekindled in a new way. It wasn't until I had a conversation with fellow sysadmin Kyle Rankin about the PXE boot environment he'd implemented that I realized it might be possible to set up a USB key to do more than merely boot a recovery environment. Before long, I had added the CentOS and Ubuntu netinstalls to my little USB key. Not long after that, I was mentioning this in my favorite IRC channel, and one of the fellows in there suggested I put the code on SourceForge and call it Billix. I'd had a couple beers by then and thought it sounded like a great idea. In that instant, Billix was born.

Billix is an aggregation of many different tools that can be useful to system administrators, all compressed down to fit within a 256MB bootable USB thumbdrive. The 256MB size is not an arbitrary number; rather, it was chosen because USB thumbdrives are very inexpensive at that size (many companies now give them away as advertising gimmicks). This allows me to have many Billix keys lying around, just waiting to be used. Because the keys are cheap or free, I don't feel bad about leaving one in a server for a day or two. If your USB drive is larger than 256MB, you still can use it for its designed purpose: storing files. Billix doesn't hamper normal use of the USB drive in any way. There also is an ISO distribution of Billix if you want to burn a CD of it, but I feel it's not nearly as convenient as having it on a USB key.

The current Billix distribution (0.21 at the time of this writing) includes the following tools:

Damn Small Linux 4.2.5

Ubuntu 8.04 LTS netinstall

Ubuntu 7.10 netinstall

Ubuntu 6.06 LTS netinstall

Fedora 8 netinstall

CentOS 5.1 netinstall

CentOS 4.6 netinstall

Debian Etch netinstall

Debian Sarge netinstall

Memtest86 memory-checking utility

Ntpwd Windows password changing utility

DBAN disk wiper utility

So, with one USB key, a system administrator can recover or repair a machine, install one of eight different Linux distributions, test the memory in a system, get into a Windows machine with a lost password, or wipe the disks of a machine before repurpose or disposal. In order to install any of the netinstall-based Linux distributions, a working Internet connection with DHCP is required, as the netinstall downloads the installation bits for each distribution on the fly from Internet-based mirrors.

Hopefully, you're excited to check out Billix. You simply can download the ISO version and burn it to a CD to get started, but the full utility of Billix really shines when you install it on a USB disk. Before you install it on a USB disk, you need to meet the following prerequisites:

Billix: a Sysadmin's Swiss Army Knife
Turn that spare USB stick into a sysadmin's dream with Billix. BILL CHILDERS



256MB or greater USB drive with FAT- or FAT32-based filesystem.

Internet connection with DHCP (for netinstalls only; not required for DSL, Windows password removal or disk wiping with DBAN).

install-mbr (part of the mbr package on Ubuntu or Debian, needed for some USB drives).

syslinux (from the syslinux package on Ubuntu or Debian, required to create the bootsector on the USB drive).

Your system must be capable of booting from USB devices (most have this ability if they're made after 2005).

To install the USB-based version of Billix, first check your drive. If that drive has the U3 Windows software on it, you may want to remove it to unlock all of the drive's capacity (see the Resources section for U3 removal utilities, which are typically Windows-based). Next, if your USB drive has data on it, back up the data. I cannot stress this enough. You will be making adjustments to the partition table of the USB drive, so backing up any data that already is on the key is critical. Download the latest version of Billix from the Sourceforge.net project page to your computer. Once the download is complete, untar the contents of the tarball to the root directory of your USB drive.

Now that the contents of the tarball are on your USB drive, you need to install a Master Boot Record (MBR) on the drive and set a bootsector on the drive. The Master Boot Record needs to be set up on the USB drive first. Issue an install-mbr -p1 <device> (where <device> is your USB drive, such as /dev/sdb). Warning: make sure that you get the device of the USB drive correct, or you run the risk of messing up the MBR on your system's boot device. The -p1 option tells install-mbr to set the first partition as active (that's the one that will contain the bootsector).

Next, the bootsector needs to be installed within the first partition. Run syslinux -s <device/partition> (where <device/partition> is the device and partition of the USB drive, such as /dev/sdb1). Warning: much like installing the MBR, installing the bootsector can be a dangerous operation if you run it on the wrong device, so take care and double-check your command line before pressing the Enter key.
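The two commands above can be wrapped in a small guard script (a hypothetical convenience, not part of Billix itself; /dev/sdb is only an example device, and setting DRY_RUN previews the commands instead of writing to a disk):

```shell
# Make a Billix key bootable: MBR first (partition 1 active), then the
# syslinux bootsector on the first partition. DRY_RUN prints only.
billix_bootable() {
  dev=$1
  run() { if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi; }
  run install-mbr -p1 "$dev"
  run syslinux -s "${dev}1"
}
DRY_RUN=1
billix_bootable /dev/sdb
# prints:
# install-mbr -p1 /dev/sdb
# syslinux -s /dev/sdb1
```

Clearing DRY_RUN (or never setting it) makes the same function execute the commands for real, so you get one last look at the device name before anything is written.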

Figure 1 The Billix Boot Menu

At this point, your USB drive can be unmounted safely, and you can test it out by booting from the USB drive. Once your system successfully boots from the USB drive, you should see a menu similar to the one shown in Figure 1. Simply choose the number for what you want to boot, run or install, and that distribution will spring into action. If you don't select a number, Damn Small Linux will boot automatically after 30 seconds.

Damn Small Linux is a miniature version of Knoppix (it actually has much of the automatic hardware-detection routines of Knoppix in it). As such, it makes an excellent rescue environment, or it can be used as a quick "trusted desktop" in the event you need to "borrow" a friend's computer to do something. I have used DSL in the past to commandeer a system temporarily at a cybercafé, so I could log in to work and fix a sick server. I've even used DSL to boot and mount a corrupted Windows filesystem, and I was able to save some of the data. DSL is fairly full-featured for its size, and it comes with two window managers (JWM or Fluxbox). It can be configured to save its data back to the USB disk in a persistent fashion, so you always can be sure you have your critical files with you and that it's easily accessible.

All the Linux distribution installations have one thing in common: they are all network-based installs. Although this is a good thing for Billix, as they take up very little space (around 10MB for each distro), it can be a bad thing during installation, as the installation time will vary with the speed of your Internet connection. There is one other upside to a network-based installation: in many cases, there is no need to update the newly installed operating system after installation, because the OS bits that are downloaded are typically up to date. Note that when using the Red Hat-based installers (CentOS 4.6, CentOS 5.1 and Fedora 8), the system may appear to hang during the download of a file called minstg2.img. The system probably isn't hanging; it's just downloading that file, which is fairly large (around 40MB), so it can take a while, depending on the speed of the mirror and the speed of your connection. Take care not to specify the USB disk accidentally as the install target for the distribution you are attempting to install.

Troubleshooting Billix

A few things can go wrong when converting a USB key to run Billix (or any USB-based distribution). The most common issue is for the USB drive to fail to boot the system. This can be due to several things. Older systems often split USB disk support into USB-Floppy emulation and USB-HDD emulation. For Billix to work on these systems, USB-HDD needs to be enabled. If your drive came with the U3 Windows-based software vault, this typically needs to be disabled or removed prior to installing Billix.

If you're seeing "MBR123" or something similar in the upper-left corner, but the system is hanging, you have a misconfigured MBR. Try install-mbr again, and make sure to use the -p1 switch. You will need to run syslinux again after running install-mbr. If all else fails, you probably need to wipe the USB drive and begin again. Back up the data on the USB drive, then use fdisk to build a new partition table (make sure to set it as FAT or FAT32). Use mkfs.vfat (with the -F 32 switch if it's a FAT32 filesystem) to build a new blank filesystem, untar the tarball again, and run install-mbr and syslinux on the newly defined filesystem.
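The wipe-and-rebuild sequence from the troubleshooting sidebar can be sketched the same way (again hypothetical; the mount point and tarball name are placeholders, the interactive fdisk repartitioning step is assumed to have been done already, and DRY_RUN previews rather than runs):

```shell
# Rebuild a misbehaving Billix key after fdisk has recreated a FAT32
# partition: new filesystem, re-extract the tarball, then reinstall
# the MBR and bootsector. DRY_RUN prints the commands only.
billix_rebuild() {
  dev=$1; tarball=$2
  run() { if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi; }
  run mkfs.vfat -F 32 "${dev}1"
  run tar -C /mnt/usb -xf "$tarball"   # assumes the key is mounted at /mnt/usb
  run install-mbr -p1 "$dev"
  run syslinux -s "${dev}1"
}
DRY_RUN=1
billix_rebuild /dev/sdb billix-current.tar.gz
```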

The memtest86 utility has been around for quite a few years, yet it's a key tool for a sysadmin when faced with a flaky computer. It does only one thing, but it does it very well: it tests the RAM of a system very thoroughly. Simply boot off the USB drive, select memtest from the menu, and press Enter, and memtest86 will load and begin testing the RAM of the system immediately. At this point, you can remove the USB drive from the computer. It's no longer needed, as memtest86 is very small and loads completely into memory on startup.

The ntpwd Windows password "cracking" tool can be a controversial tool, but it is included in the Billix distribution because, as a system administrator, I've been asked countless times to get into Windows systems (or accounts on Windows systems) where the password has been lost or forgotten. The ntpwd utility can be a bit daunting, as the UI is text-based and nearly nonexistent, but it does a good job of mounting FAT32- or NTFS-based partitions, editing the SAM account database and saving those changes. Be sure to read all the messages that ntpwd displays, and take care to select the proper disk partition to edit. Also, take the program's advice and nullify a password rather than trying to change it from within the interface; zeroing the password works much more reliably.

DBAN (otherwise known as Darik's Boot and Nuke) is a very good "nuke it from orbit" hard disk wiper. It provides various levels of wipe, from a basic "overwrite the disk with zeros" to a full DoD-certified, multipass wipe. Like memtest86, DBAN is small and loads completely into memory, so you can boot the utility, remove the USB drive, start a wipe and move on to another system. I've used this to wipe clean disks on systems before handing them over to a recycler or before selling a system.

In closing, Billix may not make you coffee in the morning or eradicate Windows from the face of the earth, but having a USB key in your pocket that offers you the functionality to do all of those tasks quickly and easily can make the life of a system administrator (or any Linux-oriented person) much easier.

Bill Childers is an IT Manager in Silicon Valley, where he lives with his wife and two children. He enjoys Linux far too much, and probably should get more sun from time to time. In his spare time, he does work with the Gilroy Garlic Festival, but he does not smell like garlic.




Expanding Billix
It's relatively easy to expand Billix to support other Linux distributions, such as Knoppix or the Ubuntu live CDs. Copy the contents of the Billix USB tarball to a directory on your hard disk, and download the distro you want. Copy the necessary kernel and initrd to the directory where you put the contents of the USB tarball, taking care to rename any files if there are files in that directory with the same name. Copy any compressed filesystems that your new distro may use to the USB drive (for example, Knoppix has the KNOPPIX directory, and Puppy Linux uses PUP_XXX.SFS). Then, look at the boot configuration for that distro (it should be in isolinux.cfg). Take the necessary lines out of that file, and put them in the Billix syslinux.cfg file, changing filenames as necessary. Optionally, you can add a menu item to the boot.msg file. Finally, run syslinux -s <device>, and reboot your system to test out your newly expanded Billix.
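As an illustration, a borrowed stanza dropped into Billix's syslinux.cfg might look something like this (the label, filenames and append options here are hypothetical; copy the real lines from the distro's own isolinux.cfg, and make the filenames match what you actually copied onto the key):

```
label knoppix
  kernel knx-linux
  append initrd=knx-minirt.gz ramdisk_size=100000 lang=us vga=791
```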

I have a 2GB USB drive that has a "Super-Billix" installation that includes Knoppix and Ubuntu 8.04. An added bonus of having the entire Ubuntu live CD in your pocket is that, thanks to the speed of USB 2.0, you can install Ubuntu in less than ten minutes, which would be really useful at an installfest. There is good information on creating Ubuntu-bootable USB drives available at the Pendrive Linux Web site.

Alternatively, a really neat thing to do (but way beyond the scope of this article) is to convert Billix into a network-boot (via Pre-Execution Environment, or PXE) environment. I've actually got a VMware virtual machine running Billix as a PXE boot server.

Resources

Billix Project Page: sourceforge.net/projects/billix

Damn Small Linux: www.damnsmalllinux.org

DBAN Project Page: dban.sourceforge.net

Knoppix: www.knoppix.net

Pendrive Linux: www.pendrivelinux.com

Syslinux: syslinux.zytor.com/index.php

Pxelinux: syslinux.zytor.com/pxe.php

U3 Removal Software: www.u3.com/uninstall


If a tree falls in the woods and no one is there to hear it, does it make a sound? This is the classic query designed to place your mind into the Zen-like state known as the silent mind. Whether or not you want to hear a tree fall, if you run a network, you probably want to hear a server when it goes down. Many organizations utilize the long-established Simple Network Management Protocol (SNMP) as a way to monitor their networks proactively and listen for things going down.

At a rudimentary level, SNMP requires only two items to work: a management server and a managed device (or devices). The management server pulls status and health information at regular intervals from the managed devices and stores the information in a table. Managed devices use local SNMP agents to notify the management server when defined behavior occurs (such as errors or "traps"), which are stored in the same table on the server. The result is an accurate, real-time reporting mechanism for outages. However, SNMP as a protocol does not stipulate how the data in these tables is to be presented and managed for the end user. That's where a promising new open-source network-monitoring software called Zenoss (pronounced Zeen-ohss) comes in.

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment. The Core version of Zenoss used in this article is freely available under the GPLv2. An Enterprise version also is available with additional features and support. In this article, we install Zenoss on a CentOS 5.1 system to observe its usefulness in a network-monitoring role. From there, we create a simulated multisystem server network using the following systems: a Fedora-based Postfix e-mail server, an Ubuntu server running Apache and a Windows server running File and Print services. To conserve space, only the CentOS installation is discussed in detail here. For the managed systems, only SNMP installation and configuration are covered.

Building the Zenoss Server
Begin by selecting your hardware. Zenoss lacks specific hardware requirements, but it relies heavily on MySQL, so you can use MySQL requirements as a rough guideline. I recommend using the fastest processor available, 1GB of memory, fast enough hard disks to provide acceptable MySQL performance and Gigabit Ethernet for the network. I ran several test configurations, and this configuration seemed adequate enough for a medium-size network (100+ nodes/devices). To keep configuration simple, all firewalls and SELinux instances were disabled in the test environment. If you use firewalls in your environment, open ports 161 (SNMP), 8080 (Zenoss Management Page) and 514 (if you integrate syslog with Zenoss).

Install CentOS 5.1 on the server using your own preferences. I used a bare install with no X Window System or desktop manager. Assign a static IP address and any other pertinent network information (DNS servers and so forth). After the OS install is complete, install the following packages using the yum command below:

yum install mysql mysql-server net-snmp net-snmp-utils gmp httpd

If the mysqld or the httpd service has not started after yum installs it, start it and set it to run for your configured runlevel. Next, download the latest Zenoss Core rpm from Sourceforge.net (2.1.3 at the time of this writing), and install it using rpm from the command line. To start all the Zenoss-related daemons after the rpm has been installed, type the following at a command prompt:

service zenoss start

Launch a Web browser from any machine, and type the IP address of the Zenoss server using port 8080 (for example, http://192.168.142.6:8080). Log in to the site using the default account admin with a password of zenoss. This brings up the main dashboard. The dashboard is a compartmentalized view of the state of your managed devices. If you don't like the default display, you can arrange your dashboard any way you want using the various drop-down lists on the portlets (windows). I recommend setting the Production States portlet to display Production, so we can see our test systems after they are added.

Almost everything related to managed devices in Zenoss revolves around classes. With classes, you can create an infinite number of systems, processes or service classifications to monitor. To begin adding devices, we need to set our SNMP community strings at the top-level Devices class. SNMP community strings are like passphrases used to authenticate traffic between devices. If one device wants to communicate with another, they must have matching community names/strings. In many deployments, administrators use the default community name of public (and/or private), which creates a security risk. I recommend changing these strings and making them into a short phrase. You can add numbers and characters to make the community name more complex to guess/crack, but I find phrases easier to remember.

Zenoss and the Art of Network Monitoring
If a server goes down, do you want to hear it? JERAMIAH BOWLING

Click on the Devices link on the navigation menu on the left, so that Devices is listed near the top of the page. Click on the zProperties tab and scroll down. Enter an SNMP community string in the zSNMPCommunity field. For our test environment, I used the string whatsourclearanceclarence. You can use different strings with different subclasses of systems or individual systems, but by setting it at the Devices class, it will be used for any subclasses unless it is overridden. You also could list multiple strings in the zSNMPCommunities under the Devices class, which allows you to define multiple strings for the discovery process discussed later. Make sure your community string (zSNMPCommunity) is in this list.

Installing Net-SNMP on Linux Clients
Now let's set up our Linux systems so they can talk to the Zenoss server. After installing and configuring the operating systems on our other Linux servers, install the Net-SNMP package on each, using the following command on the Ubuntu server:

sudo apt-get install snmpd

And on the Fedora server, use:

yum install net-snmp

Once the Net-SNMP packages are installed, edit out any other lines in the Access Control sections at the beginning of /etc/snmp/snmpd.conf, and add the following lines:

#       sec.name   source            community
com2sec local      localhost         whatsourclearanceclarence
com2sec mynetwork  192.168.142.0/24  whatsourclearanceclarence

#       group.name sec.model sec.name
group   MyROGroup  v1        local
group   MyROGroup  v1        mynetwork
group   MyROGroup  v2c       local
group   MyROGroup  v2c       mynetwork

#       name  incl/excl subtree mask
view    all   included  .1      80

#       group.name context sec.model sec.level prefix read write notif
access  MyROGroup  ""      any       noauth    exact  all  none  none

Do not edit out any lines beneath the last Access Control sections. Please note that the above is only a mildly restrictive configuration. Consult the snmpd.conf file or the Net-SNMP documentation if you want to tighten access. On the Ubuntu server, you also may have to change the following line in the /etc/default/snmpd file to allow SNMP to bind to anything other than the local loopback address:

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'
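Once snmpd has been restarted with this configuration, it's worth verifying the community string from the Zenoss server before moving on. A small sketch (snmpwalk comes from the net-snmp-utils package installed earlier; the host and community here are this article's example values, and the helper just assembles the command line):

```shell
# Assemble the snmpwalk check of the agent's "system" subtree. Running
# the printed command from the Zenoss server should return the remote
# host's system information if the community string matches.
build_snmpwalk() {
  printf 'snmpwalk -v 2c -c %s %s system\n' "$1" "$2"
}
build_snmpwalk whatsourclearanceclarence 192.168.142.6
# prints: snmpwalk -v 2c -c whatsourclearanceclarence 192.168.142.6 system
```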

Installing SNMP on Windows
On the Windows server, access the Add/Remove Programs utility from the Control Panel. Click on the Add/Remove Windows Components button on the left. Scroll down the list of Components, check off Management and Monitoring Tools, and click on the Details button. Check Simple Network Management Protocol in the list, and click OK to install. Close the Add/Remove window, and go into the Services console from Administrative Tools in the Control Panel. Find the SNMP service in the list, right-click on it, and click on Properties to bring up the service properties tabs. Click on the Traps tab, and type in the community name. In the list of Trap Destinations, add the IP address of the Zenoss server. Now, click on the Security tab, and check off the Send authentication trap box, enter the community name, and give it READ-ONLY rights. Click OK, and restart the service.

Return to the Zenoss management Web page. Click the Devices link to go into the subclass of /Devices/Servers/Windows, and on the zProperties tab, enter the name of a domain admin account and password in the zWinUser and zWinPassword fields. This account gives Zenoss access to the Windows Management Instrumentation (WMI) on your Windows systems. Make sure to click Save at the bottom of the page before navigating away.

Adding Devices into Zenoss

Now that our systems have SNMP, we can add them into Zenoss. Devices can be added individually or by scanning the network. Let's do both. To add our Ubuntu server into Zenoss, click on the Add Device link under the Management navigation section. Enter the IP address of the server and the community name. Under Device Class Path, set the selection to /Server/Linux. You could add a variety of other hardware, software and Zenoss information on this page before adding a system, but at a minimum, an IP address, name and community name are required (Figure 1). Click the Add Device button, and the discovery process runs. When the results are displayed, click on the link to the new device to access it.

Figure 1. Adding a Device into Zenoss

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment.


To scan the network for devices, click the Networks link under the Browse By section of the navigation menu. If your network is not in the list, add it using CIDR notation. Once added, check the box next to your network, and use the drop-down arrow to click on the Select Discover Devices option. You will see a similar results page as the one from before. When complete, click on the links at the bottom of the results page to access the new devices. Any device found will be placed in the /Discovered class. Because we should have discovered the Fedora server and the Windows server, they should be moved to the /Devices/Servers/Linux and /Devices/Servers/Windows classes, respectively. This can be done from each server's Status tab by using the main drop-down list and selecting Manage→Change Class.

If all has gone well so far, we have a functional SNMP monitoring system that is able to monitor heartbeat/availability (Figure 2) and performance information (Figure 3) on our systems. You can customize other various Status and Performance Monitors to meet your needs, but here, we will use the default localhost monitors.

Figure 2. The Zenoss Dashboard

Figure 3. Performance data is collected almost immediately after discovery.

Creating Users and Setting E-Mail Alerts

At this point, we can use the dashboard to monitor the managed devices, but we will be notified only if we visit the site. It would be much more helpful if we could receive alerts via e-mail. To set up e-mail alerting, we need to create a separate user account, as alerts do not work under the admin account. Click on the Settings link under the Management navigation section. Using the drop-down arrow on the menu, select Add User. Enter a user name and e-mail address when prompted. Click on the new user in the list to edit its properties. Enter a password for the new account, and assign a role of Manager. Click Save at the bottom of the page. Log out of Zenoss, and log back in with the new account. Bring the settings page back up, and enter your SMTP server information. After setting up SMTP, we need to create an Alerting Rule for our new user. Click on the Users tab, and click on the account just created in the list. From the resulting page, click on the Edit tab, and enter the e-mail address to which you want alerts sent. Now, go to the Alerting Rules tab, and create a new rule using the drop-down arrow. On the edit tab of the new Alerting Rule, change the Action to email, Enabled to True, and change the Severity formula to >= Warning (Figure 4). Click Save.

Figure 4. Creating an Alert Rule

Figure 5. Zenoss alerts are sent fresh to your mailbox.


SYSTEM ADMINISTRATION

The above rule sends alerts when any Production server experiences an event rated Warning or higher (Figure 5). Using a filter, you can create any number of rules and have them apply only to specific devices or groups of devices. If you want to limit your alerts by time, to working hours for example, use the Schedule tab on the Alerting Rule to define a window. If no schedule is specified (the default), the rule runs all the time. In our rule, only one user will be notified. You also can create groups of users from the Settings page, so that multiple people are alerted, or you could use a group e-mail address in your user properties.

Services and Processes

We can expand our view of the test systems by adding a process and a service for Zenoss to monitor. When we refer to a process in Zenoss, we mean an active program, usually a daemon, running on a managed device. Zenoss uses regular expressions to monitor processes.
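Outside Zenoss, you can get a feel for how such a pattern behaves by testing it against a process listing with grep (the process names below are made up for illustration):

```shell
# Hypothetical process listing, similar to what Zenoss sees on a managed host
ps_list="/usr/lib/postfix/master
/usr/sbin/sshd
/usr/sbin/apache2"

# A regex of "master" selects only the Postfix master daemon
echo "$ps_list" | grep -c 'master'
# -> 1
```

A pattern that matches more than one process would cause Zenoss to track all of them, so it pays to test the expression first.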

To monitor Postfix on the mail server, first let's define it as a process. Navigate to the Processes page under the Classes section of the navigation menu. Use the drop-down arrow next to OS Processes, and click Add Process. Enter Postfix as the process ID. When you return to the previous page, click on the link to the new process. On the edit tab of the process, enter "master" in the Regex field. Click Save before navigating away. Go to the zProperties tab of the process, and make sure the zMonitor field is set to True. Click Save again. Navigate back to the mail server from the dashboard, and on the OS tab, use the topmost menu's drop-down arrow to select Add→Add OSProcess. After the process has been added, we will be alerted if the Postfix process degrades or fails. While still on the OS tab of the server, place a check mark next to the new Postfix process, and from the OS Processes drop-down menu, select Lock OSProcess. On the next set of options, select Lock from deletion. This protects the process from being overwritten if Zenoss remodels the server.

Figure 6. Zenoss comes with a plethora of predefined Windows services to monitor.

Services in Zenoss are defined by active network ports instead of running daemons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1-65535 and include common network apps/protocols, such as SMTP (Port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the main menu's drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class Field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server, and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).

Only the Beginning

There are a multitude of other features in Zenoss that space here prevents covering, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9) and Zenpacks that provide additional monitoring and performance-capturing capabilities for common applications.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind as long as you work with networks, with Zenoss, at least you will be able to sleep at night knowing you will hear things when they go down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

Resources

Zenoss: www.zenoss.com

Zenoss SourceForge Downloads Page: sourceforge.net/project/showfiles.php?group_id=163126

NET-SNMP: net-snmp.sourceforge.net

CentOS: www.centos.org

CentOS 5 Mirrors: isoredirect.centos.org/centos/5/isos/i386

Figure 7. Monitoring HTTP as an IPService

Figure 8. Zenoss automatically maps your network for you.

Figure 9. Multiple sites can be monitored geographically with the Google Maps API.


In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market: Dell and HP/Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects, such as the Etherboot Project and the Linux Terminal Server Project, have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys world-wide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?

A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot EXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP, stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4); for example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.
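For stage 3 to work, the PXE server must export the client's root directory tree over NFS. With LTSP, the relevant /etc/exports entry might look like the following (the chroot path and subnet are assumptions for a typical Edubuntu setup):

```
/opt/ltsp/i386  192.168.0.0/24(ro,no_root_squash,async)
```

The export is read-only; clients keep their writable temporary files on a local ramdisk, as described above.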

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a

Thin Clients Booting over a Wireless Bridge

How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON


Figure 1. LTSP Traffic Profile and Boot Sequence

thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.
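To put that figure in perspective, a quick back-of-the-envelope calculation (illustrative only, not a measurement from our tests) shows that even a full eight-hour day of steady 5kbps session traffic amounts to only about 18MB:

```shell
# 5kbps = 5000 bits/s; an 8-hour day = 28800 s; divide by 8 for bytes,
# then by 1,000,000 for megabytes
echo $(( 5000 * 28800 / 8 / 1000000 ))
# -> 18
```

That is less data than a single thin client transfers during its boot, which is why the boot phase, not the session, is the interesting load to study.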

How to Set Up a Thin-Client Network with a Wireless Bridge

One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1 serving PXE clients.

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap:    192.168.0.0/24
rpc.mountd: 192.168.0.0/24
rpc.statd:  192.168.0.0/24
in.tftpd:   192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart
sudo invoke-rc.d nfs-common restart
sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide in getting the bridge running so that the clients can boot over the bridge:

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet


adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all daemon programs manually.

Evaluating the Performance of a Thin-Client Network

The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) comes with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to a wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us the repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber;



Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Conclusion

As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our test shows, as illustrated in Figure 5, heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple of ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.

Acknowledgement

This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03/IN3/1396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now, you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in remote power and remote terminals, or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on, boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup

You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP

Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), and locate the subnet section (or each host section, if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;
filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus

Set up a PXE server and then add menus to boot kickstart images, rescue disks and diagnostic tools all from the network. KYLE RANKIN



Install TFTP

After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package, or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
# the trivial file transfer protocol.
# The tftp protocol is often used to boot diskless
# workstations, download configuration files to network-aware
# printers, and to start the installation process for
# some operating systems.
service tftp
{
    disable         = no
    socket_type     = dgram
    protocol        = udp
    wait            = yes
    user            = root
    server          = /usr/sbin/in.tftpd
    server_args     = -s /var/lib/tftpboot
    per_source      = 11
    cps             = 100 2
    flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 UDP) as input.

Add Syslinux

Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.

PXE Menu Magic

You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

Finally if no specific files can be found pxelinux will lookfor a file named default and use it
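You can predict the hex file names pxelinux will request with a few lines of shell. This sketch plays the role of the gethostip utility that ships with syslinux; the function name is my own:

```shell
# Convert a dotted-quad IP address into the uppercase hex string that
# pxelinux uses when searching pxelinux.cfg/ for host-specific files.
ip_to_hexname() {
    local IFS=.
    set -- $1                      # split the address on the dots
    printf '%02X%02X%02X%02X\n' "$1" "$2" "$3" "$4"
}

ip_to_hexname 10.0.0.1             # prints 0A000001
```

For 10.0.0.1, pxelinux would try 0A000001, then 0A00000, 0A0000 and so on, dropping one hex digit per attempt, before falling back to the file named default.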

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux

label linux
        kernel vmlinuz-centos-3.6
        append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?
The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software.

www.linuxjournal.com | 27

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools like Knoppix and memtest86+ available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus
Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more-specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
        kernel vmlinuz-knx5.1.1
        append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

label 2
        kernel vmlinuz-centos-4.5-64
        append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option sets a message file to output to the screen by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------

<F1> Main | <F2> Help
Default image will boot in 30 seconds.

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.
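As a quick sketch, you can generate such a help file with a heredoc. The help text below is illustrative, and the snippet writes to a scratch directory so it is safe to try anywhere; point msgdir at /var/lib/tftpboot on a real PXE server:

```shell
# Generate a short F2 help screen for the two menu entries shown above.
# Using a scratch directory for illustration; use your TFTP root
# (/var/lib/tftpboot) in a real deployment.
msgdir=$(mktemp -d)
cat > "$msgdir/f2.msg" <<'EOF'
Help
----
1: Boot Knoppix 5.1.1 in rescue mode over NFS
2: Kickstart a CentOS 4.5 64-bit install

Press F1 to return to the boot options screen.
EOF
```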

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk
One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk; I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the ones I used above in my example:

label 1
        kernel vmlinuz-knx5.1.1
        append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx5.1.1 and miniroot-knx5.1.1.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove 2 from the append line.

SYSTEM ADMINISTRATION

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them.

The final step might be the largest for you if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.
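For reference, the corresponding /etc/exports entry on the NFS server might look like the line below. The client network range is an assumption; adjust it to your LAN, and run exportfs -ra afterward to publish the share:

```
# /etc/exports -- read-only export of the copied Knoppix CD contents
/mnt/knoppix/5.1.1    10.0.0.0/24(ro,async,no_root_squash)
```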

Extra Features: Memtest86+
Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
        kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion
There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.


New: LinuxJournal.com Mobile

We are all very excited to let you know that LinuxJournal.com is optimized for mobile viewing. You can enjoy all of our news, blogs and articles from anywhere you can find a data connection on your phone or mobile device. We know you find it difficult to be separated from your Linux Journal, so now you can take LinuxJournal.com everywhere. Need to read that latest shell script trick right now? You got it. Go to m.linuxjournal.com to enjoy this new experience, and be sure to let us know how it works for you. — KATHERINE DRUCKMAN

Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan
IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825–1829). Later, in 1998, these RFCs were deprecated by RFCs 2401–2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.
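In text form, a tunnel-mode ESP packet is laid out roughly like this (when ESP authentication is used, the ICV covers everything from the ESP header through the ESP trailer, and the original packet plus the trailer are encrypted):

```
new IP header | ESP header | original IP header | original payload | ESP trailer | ESP auth (ICV)
```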

Creating VPNs with IPsec and SSL/TLS
How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN

Figure 1. A Basic VPN Tunnel

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. S/WAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a setmark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.
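A sketch of that marking approach is shown below. The mark value 50 is arbitrary, and the rules are illustrative rather than a complete firewall: incoming ESP packets are marked before decryption, and because decryption preserves the mark, a later rule can match the decrypted traffic:

```
# Mark incoming ESP packets; the mark survives decryption in the kernel.
iptables -t mangle -A PREROUTING -p esp -j MARK --set-mark 50

# Match the decrypted IPsec traffic by its mark.
iptables -A INPUT -m mark --mark 50 -j ACCEPT
```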

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN
OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces.

Example: Setting Up a VPN Tunnel with IPsec and Openswan
First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
        # Simply use raw RSA keys.
        # After starting openswan, run:
        #       ipsec showhostkey --left (or --right)
        # and fill in the connection similarly
        # to the example below.
        left=192.168.0.89
        leftrsasigkey=0sAQPP
        # The remote user.
        right=192.168.0.92
        rightrsasigkey=0sAQON
        type=tunnel
        auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsa.key

Then, copy and paste the contents of rsa.key into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. An output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                              [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                         [OK]
NETKEY detected, testing for disabled ICMP send_redirects    [OK]
NETKEY detected, testing for disabled ICMP accept_redirects  [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets)  [OK]
Checking that pluto is running                               [OK]
Checking for 'ip' command                                    [OK]
Checking for 'iptables' command                              [OK]
Opportunistic Encryption Support                             [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets flowing in the same direction; this is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.


Example: Setting Up a VPN Tunnel with OpenVPN
First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side in a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest type of key; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar 3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read from or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).

IPsec and OpenVPN: a Short Comparison
IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in the kernel, OpenVPN runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement
Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era
Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security
Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

Data Abstraction
It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
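A back-of-envelope sketch of the arithmetic involved; the six-call count comes from the ATM example above, while the latency figure is an assumption for illustration, not a measurement:

```python
# Model: each client/server SQL statement pays one network round trip,
# while a single CALL to a stored program pays only one in total.
round_trip_ms = 2.0          # assumed LAN round-trip latency
client_side_calls = 6        # balances, limits, UPDATEs, commit
stored_proc_calls = 1        # one CALL does all the work server-side

client_network_ms = client_side_calls * round_trip_ms
stored_network_ms = stored_proc_calls * round_trip_ms

print(client_network_ms, stored_network_ms)
```

With these assumed numbers, the client-side version spends six times as long purely waiting on the network, before any SQL executes at all.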

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications

Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

For instance, in a banking application, transfer-of-funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed

It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

SYSTEM ADMINISTRATION

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network: the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.
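The effect can be sketched with a local database standing in for the remote server (SQLite here, purely for illustration; over a real network, the row-shipping approach would additionally pay a per-row transfer cost):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, hours REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, float(i % 10)) for i in range(10000)])

# Client-style processing: every row crosses the connection (over a real
# network, 10,000 transfers) before the application computes the metric.
rows = conn.execute("SELECT hours FROM orders").fetchall()
client_avg = sum(h for (h,) in rows) / len(rows)

# Server-side processing: the aggregation runs where the data lives, and
# only a single value comes back -- the stored-program situation.
(server_avg,) = conn.execute("SELECT AVG(hours) FROM orders").fetchone()

print(client_avg, server_avg, len(rows))
```

Both approaches compute the same answer; the difference is how many values have to leave the database to get it.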

Logic Fragmentation

Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping

It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?

Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations

MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes, and will take effect for all applications that access the database, including ad hoc SQL.
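As a sketch of the idea, here is an analogous trigger in SQLite (driven from Python) that maintains a denormalized per-customer order count; the table names and counting rule are invented for the example, and MySQL's trigger syntax differs in detail:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT);
CREATE TABLE order_counts (customer TEXT PRIMARY KEY, n INTEGER);

-- Denormalization maintained by the database itself: every INSERT,
-- from any application or ad hoc SQL, updates the running count.
CREATE TRIGGER orders_ai AFTER INSERT ON orders
BEGIN
    INSERT OR IGNORE INTO order_counts (customer, n)
        VALUES (NEW.customer, 0);
    UPDATE order_counts SET n = n + 1 WHERE customer = NEW.customer;
END;
""")

conn.execute("INSERT INTO orders (customer) VALUES ('alice')")
conn.execute("INSERT INTO orders (customer) VALUES ('alice')")
conn.execute("INSERT INTO orders (customer) VALUES ('bob')")

counts = dict(conn.execute("SELECT customer, n FROM order_counts"))
print(counts)
```

No application code ever touches order_counts; the trigger keeps it correct for every writer, which is exactly the property the paragraph describes.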

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
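An analogous sketch using sqlite3's create_function, which likewise lets plain SQL call a centrally defined calculation; the discount rule and table are invented for the example:

```python
import sqlite3

# A business calculation defined once and then used from plain SQL --
# the role a MySQL stored function plays for its callers.
def discounted(price, pct):
    return round(price * (1 - pct / 100.0), 2)

conn = sqlite3.connect(":memory:")
conn.create_function("discounted", 2, discounted)
conn.execute("CREATE TABLE items (name TEXT, price REAL)")
conn.execute("INSERT INTO items VALUES ('widget', 10.0), ('gadget', 40.0)")

rows = conn.execute(
    "SELECT name, discounted(price, 25) FROM items ORDER BY name"
).fetchall()
print(rows)
```

Every query that needs the discount logic calls the one registered function, so a change to the rule happens in a single place.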

You Decide

The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006; with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.



In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat

Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare, on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat

Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.
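One way to produce such a long key is to generate it rather than invent it; a short sketch (the 64-hex-character length is my own choice for the example, not a Heartbeat requirement):

```python
import secrets

# Generate a long random key for the sha1 line of /etc/ha.d/authkeys.
# 32 random bytes rendered as 64 hex characters is comfortably long.
key = secrets.token_hex(32)
line = f"1 sha1 {key}"

print(line)
```

Paste the printed line into authkeys on every node (the same key everywhere), keeping the file root-owned and mode 600 as described above.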

The ha.cf File

The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive: the time between heartbeats, in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try increasing the keepalive.

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

Listing 1. The /etc/ha.d/authkeys File

auth 1
1 sha1 ThisIsAVeryLongAndBoringPassword

Sometimes there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the cluster declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.
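The relationships among these four timing directives can be sanity-checked mechanically; the ordering rules below follow the descriptions above, but the specific multiplier is a rule of thumb, not a hard Heartbeat requirement:

```python
def check_timings(keepalive, warntime, deadtime, initdead):
    """Sanity-check ha.cf timing values (all in seconds).

    Rules of thumb: warnings should fire before a node is declared
    dead, deadtime should tolerate several missed heartbeats, and
    initdead should be at least as long as deadtime."""
    problems = []
    if not keepalive < warntime < deadtime:
        problems.append("expect keepalive < warntime < deadtime")
    if deadtime < 2 * keepalive:
        problems.append("deadtime should allow several missed heartbeats")
    if initdead < deadtime:
        problems.append("initdead should be at least deadtime")
    return problems

# Values from the two example ha.cf listings in this article; the second
# listing sets no initdead, so deadtime stands in for it in the check.
print(check_timings(2, 16, 32, 64))
print(check_timings(1, 5, 10, 10))
```

Both example configurations pass cleanly; an obviously inverted set of values (say, warntime shorter than keepalive) would be flagged.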

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2
deadtime 32
warntime 16
initdead 64
baud 19200
# On briggs, the serial device is /dev/ttyS1.
# On stratton, the serial device is /dev/ttyS0.
serial /dev/ttyS1
auto_failback on
node briggs
node stratton
use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1
deadtime 10
warntime 5
udpport 694
# deimos' heartbeat IP address is 192.168.1.11
# phobos' heartbeat IP address is 192.168.1.21
ucast eth1 192.168.1.11
auto_failback off
stonith_host deimos wti_nps ares.example.com erisIsTheKey
stonith_host phobos wti_nps ares.example.com erisIsTheKey
node deimos
node phobos
use_logd yes

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File

The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.
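The left-to-right start and right-to-left stop ordering can be sketched directly; the resource strings here are illustrative (modeled on the NFS cluster example), and the point is purely the ordering:

```python
# Heartbeat starts a haresources line's resources left to right on
# takeover and stops them right to left on release, so a filesystem
# mounts before the service that needs it and unmounts after it stops.
resources = [
    "IPaddr::192.168.1.21",
    "Filesystem::/dev/etherd/e1.0::/opt/storage::xfs",
    "nfs-kernel-server",
]

start_order = list(resources)           # takeover: left to right
stop_order = list(reversed(resources))  # release: right to left

print(start_order[0], "->", start_order[-1])
print(stop_order[0], "->", stop_order[-1])
```

The NFS server is the last thing started and the first thing stopped, which is exactly the dependency order you want.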

Resources are scripts located either in /etc/ha.d/resource.d/ or /etc/init.d/, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.
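The relationship between the /24 prefix and the dotted netmask, and the :: field layout, can be checked with a few lines of Python (the parsing here is a sketch of the convention described above, not Heartbeat's own parser):

```python
import ipaddress

# Unpack the IPaddr resource argument: resource name, address with
# prefix length, and interface. The dotted netmask follows from /24.
spec = "IPaddr::192.168.1.9/24/eth0"
resource, rest = spec.split("::", 1)
addr_prefix, device = rest.rsplit("/", 1)
iface = ipaddress.ip_interface(addr_prefix)

print(resource, iface.ip, iface.netmask, device)
```

Running this confirms that a 24-bit prefix on 192.168.1.9 corresponds to the 255.255.255.0 mask mentioned in the text.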

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right, the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).
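Unpacking that line by its :: separators makes the left-to-right argument order explicit:

```python
# Split the Filesystem resource line into the fields the text names:
# resource, device node, mountpoint and filesystem type.
line = "Filesystem::/dev/etherd/e1.0::/opt/data::xfs"
resource, device, mountpoint, fstype = line.split("::")

print(resource, device, mountpoint, fstype)
```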



Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages.

Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting

Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly, or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion

I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha


In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation

To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more space. I recommend at least a 1GHz processor or above with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for logs and plugins.

Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure you should carefullyconsider how you want to organize your users in the directoryThere is not enough space here for a decent primer on directoryplanning but I typically use Organizational Units (OUs) tosegment people grouped together by common securityneeds such as by department (for example Accounting)Then within an OU I create users and groups for simplifying

permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate

the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora + FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already to make life easier.
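As a minimal sketch of those prerequisite checks, the script below verifies that two clocks are within a tolerance of each other; the hostnames, the 5-second tolerance and the helper name are illustrative assumptions, not values from the article:

```shell
#!/bin/sh
# Sketch: sanity-check replication prerequisites between two servers.
# Hostnames and the 5-second tolerance below are assumptions for
# illustration, not values taken from the article.

# Succeed if two epoch timestamps are within $3 seconds of each other.
offset_ok() {
    t1=$1; t2=$2; tol=$3
    diff=$((t1 - t2))
    [ "$diff" -lt 0 ] && diff=$((-diff))
    [ "$diff" -le "$tol" ]
}

# In practice, you would compare local time against the peer's, e.g.:
#   remote=$(ssh server2 date +%s)
#   offset_ok "$(date +%s)" "$remote" 5 || echo "clock skew too large"
# and confirm name resolution in both directions:
#   getent hosts server1; getent hosts server2
```

A cron job running such a check can warn you before replication silently stalls from clock drift.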

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add and then the Save button at the bottom. On


SYSTEM ADMINISTRATION

In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others.

Figure 5. User Screen
Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication, which sends only deltas rather than the entire directory database when changes occur and is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync. You also could choose to schedule your updates instead. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click Next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients

With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility, available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tab. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM, instead of your local passwd and shadow files, to authenticate users.
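For a sense of what the utility writes on your behalf, the fragments below sketch the relevant entries; the server address and base DN are placeholder assumptions for illustration, not values from the article:

```
# /etc/ldap.conf (values are examples)
host ldap.example.com
base dc=example,dc=com

# /etc/nsswitch.conf -- consult LDAP after local files
passwd: files ldap
shadow: files ldap
group:  files ldap
```

Seeing these entries helps when troubleshooting a client that the GUI tool has misconfigured.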


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll

to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some .dll issues with

version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.



Setting up replication in FDS is a relatively painless process.

Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525



Special_Issue.tar.gz

SHAWN POWERS

Welcome to the Linux Journal family. The issue you currently hold in your hands, or more precisely, the issue

you're reading on your screen, was created specifically for you. It's so frustrating to subscribe to a magazine and then wait a month or two before you ever get to read your first issue. You decided to subscribe, so we decided to give you some instant gratification. Because really, isn't that the best kind?

This entire issue is dedicated to system administration. As a Linux professional or an everyday penguin fan, you're most likely already a sysadmin of some sort. Sure, some of us are in charge of administering thousands of computers, and some of us just keep our single desktop running. With Linux, though, it doesn't matter how big or small your infrastructure might be. If you want to run a MySQL server on your laptop, that's not a problem. In fact, in this issue, we even talk about stored procedures on MySQL 5, so we'll show you how to get the most out of that laptop server.

If you do manage lots of machines, however, there are some tools that can make your job much easier. Cfengine, for instance, can help with configuration management for tens, hundreds or even thousands of remote computers. It sure beats running to every server and configuring them individually. But what if you mess up those configuration files and half your servers go off-line? Don't worry, we'll teach you about Heartbeat. It's one of those high-availability programs that helps you keep going even when some of your servers die. If a computer dies alone in your server room, does anyone hear it scream? Heartbeat does.

Do you have more than one server room? More than one location? Many of us do. Programs like Zenoss can help you monitor your whole organization from one central place. In the following pages, we'll show you all about it. In fact, using a VPN (which we talk about here) and remote desktop sessions

(again, covered here), you probably can manage most of your network without even leaving home. If you haven't reconfigured a server from a coffee shop across the country, you haven't lived.

I do understand, however, that lots of sysadmins enjoy going in to work. That's great; we like office coffee too. Keep reading to find many hands-on tools that will make your work day more efficient, more productive and even more fun. Billix, for example, is a USB-booting Linux distribution that has tons of great features ready to use. It can install operating systems, rescue broken computers, and if you want to do something it doesn't do by default, that's okay; the article explains how to include any custom tools you might want.

Whether it's booting thin clients over a wireless bridge or creating custom PXE boot menus, the life of a system administrator is never boring. At Linux Journal, we try to make your job easier month after month with product reviews, tech tips, games (you know, for your lunch break) and in-depth articles like you'll read here. Whether you're a command-line junkie or a GUI guru, Linux has the tools available to make your life easier.

We hope you enjoy this bonus issue of Linux Journal. If you read this whole thing and are still anxious to get your hands on more Linux and open-source-related information, but your first paper issue hasn't arrived yet, don't forget about our Web site. Check us out at www.linuxjournal.com. If you manage to read the thousands and thousands of on-line articles and still don't have your first issue, I suggest checking with the post office. There must be something wrong with your mailbox.

Shawn Powers is the Associate Editor for Linux Journal. He's also the Gadget Guy for LinuxJournal.com, and he has an interesting collection of vintage Garfield coffee mugs. Don't let his silly hairdo fool you; he's a pretty ordinary guy and can be reached via e-mail at shawn@linuxjournal.com. Or, swing by the #linuxjournal IRC channel on Freenode.net.

System Administration: Instant Gratification Edition


Cfengine is known by many system administrators to be an excellent tool to automate manual tasks on UNIX and Linux-based machines. It also is the most comprehensive framework to execute administrative shell scripts across many servers running disparate operating systems. Although cfengine is certainly good for these purposes, it also is widely considered the best open-source tool available for configuration management. Using cfengine, sysadmins with a large installation of, say, 800 machines can have information about their environment quickly that otherwise would take months to gather, as well as the ability to change the environment in an instant. For an initial example, if you have a set of Linux machines that need to have a different /etc/nsswitch.conf and then have some processes restarted, there's no need to connect to each machine and perform these steps, or even to write a script and run it on the machines once they are identified. You simply can tell cfengine that all the Linux machines running Fedora/Debian/CentOS with X GB of RAM or more need to use a particular /etc/nsswitch.conf until a newer one is designated. Cfengine can do all that in a one-line statement.

Cfengine's configuration management capabilities can work in several different ways. In this article, I focus on a make-it-so-and-keep-it-so approach. Let's consider a small hosting company configuration with three administrators and two data centers (Figure 1).

Each administrator can use a Subversion/CVS sandbox to hold repositories for each data center. The cfengine client will run on each client machine, either through a cron job or a cfengine execution daemon, and pull the cfengine configuration files appropriate for each machine from the server. If there is work to be done for that particular machine, it will be carried out and reported to the server. If there are configuration files to copy, the ones active on the client host will be replaced by the copies on the cfengine server. (Cfengine will not replace a file if the copy process is partial or incomplete.)

A cfengine implementation has three major components:

- Version control: this usually consists of a versioning system, such as CVS or Subversion.

- Cfengine internal components: cfservd, cfagent, cfexecd, cfenvd, cfagent.conf and update.conf.

- Cfengine commands: processes, files, shellcommands, groups, editfiles, copy and so forth.

The cfservd is the master daemon, configured with /etc/cfservd.conf, and it listens on port 5803 for connections to the cfengine server. This daemon controls security and directory access for all client machines connecting to it. cfagent is the client program for running cfengine on hosts. It will run either from cron, manually or from the execution daemon for cfengine, cfexecd. A common method for running the cfagent is to execute it from cron using the cfexecd in non-daemon mode. The primary reason for using both is to engage cfengine's logging system. This is accomplished using the following:

*/10 * * * * /var/cfengine/sbin/cfexecd -F

as a cron entry on Linux (unless Solaris starts to understand */10). Note that this is fairly frequent and good only for a low number of servers. We don't want 800 servers updating within

Cfengine for Enterprise Configuration Management

Cfengine makes it easier to manage configuration files across large numbers of machines.

SCOTT LACKEY


Figure 1. How the Few Control the Many

the same ten minutes. The cfenvd is the "environment daemon" that runs on the

client side of the cfengine implementation. It gathers information about the host machine, such as hostname, OS and IP address. The cfenvd detects these factors about a host and uses them to determine to which groups the machine belongs. This, in effect, creates a profile for each machine that cfengine uses to determine what work to perform on each host.
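The warning above about 800 servers updating within the same ten minutes is usually addressed by staggering each host's start time. Cfengine has its own splay facility for this, but the idea can be sketched in plain shell: derive a stable offset from the hostname so every host picks a different, repeatable minute. The hashing scheme below is an illustrative assumption, not cfengine's actual algorithm:

```shell
#!/bin/sh
# Sketch: derive a stable per-host minute offset (0-9) from the hostname,
# so hosts on a */10 schedule don't all call home at once. Illustrates
# the idea behind cfengine's splay; the hash scheme is an assumption.

splay_minute() {
    # cksum produces the same CRC for the same input on any POSIX system
    sum=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
    echo $((sum % 10))
}

# Example: offset the cron entry by the host's splay minute, e.g.:
#   off=$(splay_minute "$(hostname)")
#   echo "$off-59/10 * * * * /var/cfengine/sbin/cfexecd -F"
```

Because the offset depends only on the hostname, it survives reboots and reinstalls without any coordination between hosts.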

The master configuration file for each host is cfagent.conf. This file can contain all the configuration information and cfengine code for the host, a subset of hosts or all hosts in the cfengine network. This file is often just a starting point, where all configurations are stored in other files and "imported" into cfagent.conf, in a very similar fashion to Nagios configuration files. The update.conf file is the fundamental configuration file for the client. It primarily just identifies the cfengine server and gets a copy of the cfagent.conf.

Figure 2. Automated Distribution of Cfengine Files

The update.conf file tells the cfengine server to deploy a new cfagent.conf file (and perhaps other files as well) if the current copy on the host machine is different. This adds some protection for a scenario where a corrupt cfagent.conf is sent out, or in case there never was one. Although you could use cfengine to distribute update.conf, it should be copied manually to each host.

Cfengine "commands" are not entered on the command line. They make up the syntax of the cfengine configuration language. Because cfengine is a framework, the system administrator must write the necessary commands in cfengine configuration files in order to move and manipulate data. As an example, let's take a look at the files command as it would appear in the cfagent.conf file:

files:

  /etc/passwd mode=644
              owner=root action=fixall

  /etc/shadow mode=600
              owner=root action=fixall

This would set all machines' /etc/passwd and /etc/shadow files to the permissions listed in the file (644 and 600). It also

would change the owner of the file to root and fix all of these settings if they are found to be different, each time cfengine runs. It's important to keep in mind that there are no group limitations to this particular files command. If cfengine does not have a group listed for the command, it assumes you mean any host. This also could be written as:

files:

  any::

    /etc/passwd mode=644
                owner=root action=fixall

    /etc/shadow mode=600
                owner=root action=fixall

This brings us to an important topic in building a cfengine implementation: groups. There is a groups command that can be used to assign hosts to groups based on various criteria. Custom groups that are created in this way are called soft groups. The groups that are filled by the cfenvd daemon automatically are referred to as hard groups. To use the groups feature of cfengine and assign some soft groups, simply create a groups.cf file, and tell the cfagent.conf to import it somewhere in the beginning of the file:

import:

  any::

    groups.cf

Cfengine will look in the default directory for the groups.cf file in /var/cfengine/inputs. Now you can create arbitrary groups based on any criteria. It is important to remember that the terms groups and classes are completely interchangeable in cfengine:

groups:

  development = ( nfs01 nfs02 10.0.0.17 )

  production = ( app01 app02 development )

You also can combine hard groups that have been discovered by cfenvd with soft groups:

groups:

  legacy = ( irix compiled_on_cygwin sco )

Let's get our testing setup in order. First, install cfengine on a server and a client or workstation. Cfengine has been compiled on almost everything, so there should be a package for your OS/distribution. Because the source is usually the latest version, and many versions are bug fixes, I recommend compiling it yourself. Installing cfengine gives you both the server and client binaries


Because cfengine is a framework, the system administrator must write the necessary commands in cfengine configuration files in order to move and manipulate data.

and utilities on every machine, so be careful not to run the server daemon (cfservd) on a client machine unless you specifically intend to do that. After the install, you should have a /var/cfengine directory and the binaries mentioned previously.

Before any host can actually communicate with the cfengine server, keys must be exchanged between the two. Cfengine keys are similar to SSH keys, except they are one-way. That is to say, both the server and the client must have each other's public key in order to communicate. Years of sysadmin paranoia cause me to recommend manually copying all keys and trusting nothing. Copy /var/cfengine/ppkeys/localhost.pub from the server to all the clients and from the clients to the server in the same directory, renaming them /var/cfengine/ppkeys/root-10.1.1.1.pub, where the IP is 10.1.1.1.
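The renaming convention above can be sketched as a tiny helper; the IP address and the scp example target are illustrative assumptions, not values mandated by cfengine:

```shell
#!/bin/sh
# Sketch: build the destination filename cfengine expects for a copied
# public key, root-<IP>.pub, following the convention described above.
# The example IP and host are placeholders.

key_name() {
    echo "root-$1.pub"
}

# Typical manual exchange (paths from the article, peer host assumed):
#   scp /var/cfengine/ppkeys/localhost.pub \
#       peer:/var/cfengine/ppkeys/"$(key_name 10.1.1.1)"
```

Wrapping this in a loop over your host list keeps the error-prone renaming step consistent across a large fleet.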

On the server side, cfservd.conf must be configured to allow clients to access particular directories. To do this, create an AllowConnectionsFrom and an admit section:

# cfservd.conf

control:

  AllowConnectionsFrom = ( 192.168.0.0/24 )

admit:

  /configs/datacenter1 example1.com

  /configs/datacenter2 example2.com

To test your example client to see whether it is connecting to the cfengine server, make sure port 5803 is clear between them, and run the server with:

cfservd -v -d2

And, on the client, run:

cfagent -v --no-splay

This will give you a lot of debugging information on the server side to see what's working and what isn't.

Now, let's take a look at distributing a configuration file. Although cfengine has a full-featured file editor in the editfiles command, using this method for distributing configurations is not advised. The copy command will move a file from the server to the client machine with .cfnew appended to the filename. Then, once the file has been copied completely, it renames the file and saves the old copy as .cfsaved in the specified directory. Here's the copy command syntax:

copy:

  class::

    <<master-file>>

      dest=target-file
      server=server
      mode=mode
      owner=owner
      group=group
      backup=true/false
      repository=backup dir
      recurse=number/inf/0
      define=classlist

Only the dest= is required, along with the filename to save at the destination. These can be different. Here's another example:

copy:

  linux::

    $(copydir)/linux/resolv.conf

      dest=/etc/resolv.conf
      server=cfengine.example1.com
      mode=644
      owner=root
      group=root
      backup=true
      repository=/var/cfengine/cfbackup
      recurse=0
      define=copiedresolvdotconf

The last line in this copy statement assigns this host to a group called copiedresolvdotconf. Although we don't have to do anything after copying this particular file, we may want to do some action on all hosts that just had this file successfully sent to them, such as sending an e-mail or restarting a process. As another example, if you update a configuration file that is attached to a daemon, you may want to send a SIGHUP to the process to cause it to reread the configuration file. This is common with Apache's httpd.conf or inetd.conf. If the copy is not successful, this server won't be added to the copiedresolvdotconf class. You can query all servers in the network to see whether they are members and, if not, find out what went wrong.
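A follow-up action keyed to that class might be sketched like this; the stanza structure follows the shellcommands examples later in the article, but the logger command itself is an illustrative placeholder, not from the original text:

```
shellcommands:

  copiedresolvdotconf::
    # runs only on hosts where the copy above succeeded;
    # the command here is an illustrative stand-in for a
    # SIGHUP, service restart or notification e-mail
    "/usr/bin/logger -t cfengine resolv.conf updated"
```

Keying actions to classes defined by copy results is what lets one policy file express "and then restart the daemon, but only where the file actually changed".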

A great way to version control your config files is to use a cfengine variable for the filename being copied to control which version gets distributed. Such a line may look something like this:

copy:

  linux::

    $(copydir)/linux/$(resolv_conf)

Or, better yet, you can use cfengine's class-specific variables, whose scope is limited to the class with which they are associated. This makes copy statements much more elegant and can simplify changes as your cfengine files scale:

control:

  # $(resolv_conf) value depends on context



A great way to version control your config files is to use a cfengine variable for the filename being copied to control which version gets distributed.

  # is this a linux machine or hpux?

  linux:: resolv_conf = ( $(copydir)/linux/resolv.conf )

  hpux:: resolv_conf = ( $(copydir)/hpux/resolv.conf )

copy:

  linux::

    $(resolv_conf)

Here is a full cfagent.conf file that makes use of everything I've covered thus far. It also adds some practical examples of how to do sysadmin work with cfengine:

# cfagent.conf

control:

  actionsequence = ( files editfiles processes )
  AddInstallable = ( cron_restart )
  solaris:: crontab = ( /var/spool/cron/crontabs/root )
  linux:: crontab = ( /var/spool/cron/root )

files:

  solaris::
    $(crontab) action=touch

  linux::
    $(crontab) action=touch

editfiles:

  solaris::
    { $(crontab)
      AppendIfNoSuchLine "0,10,20,30,40,50 * * * * /var/cfengine/sbin/cfexecd -F"
      DefineClasses "cron_restart"
    }

  linux::
    { $(crontab)
      AppendIfNoSuchLine "0,10,20,30,40,50 * * * * /var/cfengine/sbin/cfexecd -F"
    }
    # linux doesn't need a cron restart

shellcommands:

  solaris.cron_restart::
    "/etc/init.d/cron stop"
    "/etc/init.d/cron start"

import:

  any::
    groups.cf
    copy.cf

The above is a full cfagent configuration that adds cfengine execution from cron to each client (if it's Linux or Solaris). So, effectively, once you run cfengine manually for the first time with this cfagent.conf file, cfengine will continue to run every ten minutes from that host, but you won't need to edit or restart cron. The control section of the cfagent.conf is where you can define some variables that will control how

cfengine handles the configuration file. actionsequence tells cfengine what order to execute each command, and AddInstallable is a variable that holds soft groups that get defined later in the file in a "define" statement, such as after the editfiles command, where the line is DefineClasses "cron_restart". The reason for using AddInstallable is that sometimes cfengine skips over groups that are defined after command execution, and defining that group in the control section ensures that the command will be recognized throughout the configuration.

Being able to check configuration files out from a versioning system and distribute them to a set of servers is a powerful system administration tool. A number of independent tools will do a subset of cfengine's work (such as rsync, ssh and make), but nothing else allows a small group of system administrators to manage such a large group of servers. Centralizing configuration management has the dual benefit of information and control, and cfengine provides these benefits in a free, open-source tool for your infrastructure and application environments.

Scott Lackey is an independent technology consultant who has developed and deployed configuration management solutions across industry, from NASA to Wall Street. Contact him at slackey@violetconsulting.net or www.violetconsulting.net.




There are many different ways to control a Linux system over a network, and many reasons you might want to do so. When covering remote control in past columns, I've tended to focus on server-oriented usage scenarios and tools, which is to say, on administering servers via text-based applications, such as OpenSSH. But what about GUI-based applications and remote desktops?

Remote desktop sessions can be very useful for technical support, surveillance and remote control of desktop applications. But it isn't always necessary to access an entire desktop; you may need to run only one or two specific graphical applications.

In this month's column, I describe how to use VNC (Virtual Network Computing) for remote desktop sessions and OpenSSH with X forwarding for remote access to specific applications. Our focus here, of course, is on using these tools securely, and I include a healthy dose of opinion as to the relative merits of each.

Remote Desktops vs. Remote Applications

So, which approach should you use, remote desktops or remote applications? If you've come to Linux from the Microsoft world, you may be tempted to assume that because Terminal Services in Windows is so useful, you have to have some sort of remote desktop access in Linux too. But that may not be the case.

Linux and most other UNIX-based operating systems use the X Window System as the basis for their various graphical environments. And, the X Window System was designed to be run over networks. In fact, it treats your local system as a self-contained network over which different parts of the X Window System communicate.

Accordingly, it's not only possible but easy to run individual X Window System applications over TCP/IP networks, that is, to display the output (window) of a remotely executed graphical application on your local system. Because the X Window System's use of networks isn't terribly secure (the X Window System has no native support whatsoever for any kind of encryption), nowadays we usually tunnel X Window System application windows over the Secure Shell (SSH), especially OpenSSH.

The advantage of tunneling individual application windows is that it's faster and generally more secure than running the entire desktop remotely. The disadvantages are that OpenSSH has a history of security vulnerabilities, and for many Linux newcomers, forwarding graphical applications via commands entered in a shell session is counterintuitive. And besides, as I

mentioned earlier, remote desktop control (or even just viewing) can be very useful for technical support and for security surveillance.

Using OpenSSH with X Window System Forwarding

Having said all that, tunneling X Window System applications over OpenSSH may be a lot easier than you imagine. All you need is a client system running an X server (for example, a Linux desktop system or a Windows system running the Cygwin X server) and a destination system running the OpenSSH daemon (sshd).

Note that I didn't say "a destination system running sshd and an X server". This is because X servers, oddly enough, don't actually run or control X Window System applications; they merely display their output. Therefore, if you're running an X Window System application on a remote system, you need to run an X server on your local system, not on the remote system. The application will execute on the remote system and send its output to your local X server's display.

Suppose you've got two systems, mylaptop and remotebox, and you want to monitor system resources on remotebox with the GNOME System Monitor. Suppose further you're running the X Window System on mylaptop and sshd on remotebox.

First, from a terminal window or xterm on mylaptop, you'd open an SSH session like this:

mick@mylaptop:~$ ssh -X admin-slave@remotebox

admin-slave@remotebox's password:

Last login: Wed Jun 11 21:50:19 2008 from dtcla00b674986d

admin-slave@remotebox:~$

Note the -X flag in my ssh command. This enables X Window System forwarding for the SSH session. In order for that to work, sshd on the remote system must be configured with X11Forwarding set to yes in its /etc/ssh/sshd_config file. On many distributions, yes is the default setting, but check yours to be sure.
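For reference, the relevant server-side line looks like this (a sketch; the exact default varies by distribution):

```
# /etc/ssh/sshd_config on the remote system (remotebox)
X11Forwarding yes
```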

Next, to run the GNOME System Monitor on remotebox, such that its output (window) is displayed on mylaptop, simply execute it from within the same SSH session:

admin-slave@remotebox:~$ gnome-system-monitor &

Secured Remote Desktop/Application Sessions. Run graphical applications from afar, securely. MICK BAUER

SYSTEM ADMINISTRATION

The trailing ampersand (&) causes this command to run in the background, so you can initiate other commands from the same shell session. Without it, the cursor won't reappear in your shell window until you kill the command you just started.
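The same backgrounding behavior can be tried safely with any long-running command; this throwaway sketch uses sleep as a stand-in for a graphical application:

```shell
# Start a long-running command in the background; the shell keeps going
# and records the child's process ID in $!.
sleep 100 &
pid=$!
echo "running in background as PID $pid"
kill "$pid"    # stop it when it's no longer needed
```

On a real session you would background gnome-system-monitor instead, and simply close its window when done.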

At this point, the GNOME System Monitor window should appear on mylaptop's screen, displaying system performance information for remotebox. And that really is all there is to it.

This technique works for practically any X Window System application installed on the remote system. The only catch is that you need to know the name of anything you want to run in this way: that is, the actual name of the executable file.

If you're accustomed to starting your X Window System applications from a menu on your desktop, you may not know the names of their corresponding executables. One quick way to find out is to open your desktop manager's menu editor, and then view the properties screen for the application in question.

For example, on a GNOME desktop, you would right-click on the Applications menu button, select Edit Menus, scroll down to System/Administration, right-click on System Monitor and select Properties. This pops up a window whose Command field shows the name gnome-system-monitor.

Figure 1 shows the Launcher Properties, not for the GNOME System Monitor, but instead for the GNOME File Browser, which is a better example, because its launcher entry includes some command-line options. Obviously, all such options also can be used when starting X applications over SSH.

Figure 1. Launcher Properties for the GNOME File Browser (Nautilus)

If this sounds like too much trouble, or if you're worried about accidentally messing up your desktop menus, you simply can run the application in question, issue the command ps auxw in a terminal window, and find the entry that corresponds to your application. The last field in each row of the output from ps is the executable's name, plus the command-line options with which it was invoked.
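One way to see this in action is the self-contained sketch below, which uses a throwaway sleep process; on a real desktop you would grep the ps output for part of the application's name instead:

```shell
# Start a disposable process, then recover its command line from ps.
# Fields in "ps auxw" output: USER PID %CPU %MEM VSZ RSS TTY STAT
# START TIME COMMAND, so the command begins at field 11.
sleep 300 &
pid=$!
ps auxw | awk -v pid="$pid" '$2 == pid {print $11, $12}'
kill "$pid"
```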

Once you've finished running your remote X Window System application, you can close it the usual way (selecting Exit from its File menu, clicking the x button in the upper right-hand corner of its window, and so forth). Don't forget to close your SSH session too, by issuing the command exit in the terminal window where you're running SSH.

Virtual Network Computing (VNC)

Now that I've shown you the preferred way to run remote X Window System applications, let's discuss how to control an entire remote desktop. In the Linux/UNIX world, the most popular tool for this is Virtual Network Computing, or VNC.

Originally a project of the Olivetti Research Laboratory (which was subsequently acquired by Oracle and then AT&T before being shut down), VNC uses a protocol called Remote Frame Buffer (RFB). The original creators of VNC now maintain the application suite RealVNC, which is available in free and commercial versions, but TightVNC, UltraVNC and GNOME's vino VNC server and vinagre VNC client also are popular.

VNC's strengths are its simplicity, ubiquity and portability: it runs on many different operating systems. Because it runs over a single TCP port (usually TCP 5900), it's also firewall-friendly and easy to tunnel.

Its security, however, is somewhat problematic. VNC authentication uses a DES-based transaction that, if eavesdropped on, is vulnerable to optimized brute-force (password-guessing) attacks. This vulnerability is exacerbated by the fact that many versions of VNC support only eight-character passwords.

Furthermore, VNC session data usually is transmitted unencrypted. Only a couple flavors of VNC support TLS encryption of RFB streams, and it isn't clear how securely TLS has been implemented even in those. Thus, an attacker using a trivially hacked VNC client may be able to reconstruct and view eavesdropped VNC streams.

www.linuxjournal.com | 11

But Don't Real Sysadmins Stick to Terminal Sessions?

If you've read my book or my past columns, you've endured my repeated exhortations to keep the X Window System off of Internet-facing servers, or any other system on which it isn't needed, due to X's complexity and history of security vulnerabilities. So why am I devoting an entire column to graphical remote system administration?

Don't worry, I haven't gone soft-hearted (though possibly slightly soft-headed). I stand by that advice. But there are plenty of contexts in which you may need to administer or view things remotely besides hardened servers in Internet-facing DMZ networks.

And not all people who need to run remote applications in those non-Internet-DMZ scenarios are advanced Linux users. Should they be forbidden from doing what they need to do until they've mastered using the vi editor and writing bash scripts? Especially given that it is possible to mitigate some of the risks associated with the X Window System and VNC?

Of course they shouldn't, although I do encourage all Linux newcomers to embrace the command line. The day may come when Linux is a truly graphically oriented operating system like Mac OS, but for now, pretty much the entire OS is driven by configuration files in /etc (and in users' home directories), and that's unlikely to change soon.

Luckily, as it operates over a single TCP port, VNC is easy to tunnel through SSH, through Virtual Private Network (VPN) sessions and through TLS/SSL wrappers such as stunnel. Let's look at a simple VNC-over-SSH example.

Tunneling VNC over SSH

To tunnel VNC over SSH, your remote system must be running an SSH daemon (sshd) and a VNC server application. Your local system must have an SSH client (ssh) and a VNC client application.

Our example remote system, remotebox, already is running sshd. Suppose it also has vino, which is also known as the GNOME Remote Desktop Preferences applet (on Ubuntu systems, it's located in the System menu's Preferences section). First, presumably from remotebox's local console, you need to open that applet and enable remote desktop sharing. Figure 2 shows vino's General settings.

If you want to view only this system's remote desktop, without controlling it, uncheck Allow other users to control your desktop. If you don't want to have to confirm remote connections explicitly (for example, because you want to connect to this system when it's unattended), you can uncheck the Ask you for confirmation box. Any time you leave yourself logged in to an unattended system, be sure to use a password-protected screensaver.

vino is limited in this way: because vino is loaded only after you log in, you can use it only to connect to a fully logged-in GNOME session in progress, not, for example, to a gdm (GNOME login) prompt. Unlike vino, other versions of VNC can be invoked as needed from xinetd or inetd. That technique is out of the scope of this article, but see Resources for a link to a how-to for doing so in Ubuntu, or simply Google the string "vnc xinetd".

Continuing with our vino example, don't close that Remote Desktop Preferences applet yet. Be sure to check the Require the user to enter this password box, and to select a difficult-to-guess password. Remember, vino runs in an already-logged-in desktop session, so unless you set a password here, you'll run the risk of allowing completely unauthenticated sessions (depending on whether a password-protected screensaver is running).

If your remote system will be run logged in but unattended, you probably will want to uncheck Ask you for confirmation. Again, don't forget that locked screensaver.

We're not done setting up vino on remotebox yet. Figure 3 shows the Advanced Settings tab.

Several advanced settings are particularly noteworthy. First, you should check Only allow local connections, after which remote connections still will be possible, but only via a port-forwarder like SSH or stunnel. Second, you may want to set a custom TCP port for vino to listen on, via the Use an alternative port option. In this example, I've chosen 20226. This is an instance of useful security-through-obscurity: if our other (more meaningful) security controls fail, at least we won't be running VNC on the obvious default port.

Figure 2. General Settings in GNOME Remote Desktop Preferences (vino)

Figure 3. Advanced Settings in GNOME Remote Desktop Preferences (vino)

Also, you should check the box next to Lock screen on disconnect, but you probably should not check Require encryption, as SSH will provide our session encryption, and adding redundant (and not-necessarily-known-good) encryption will only increase vino's already-significant latency. If you think there's only a moderate level of risk of eavesdropping in the environment in which you want to use vino (for example, on a small private local-area network inaccessible from the Internet), vino's TLS implementation may be good enough for you. In that case, you may opt to check the Require encryption box and skip the SSH tunneling.

(If any of you know more about TLS in vino than I was able to divine from the Internet, please write in. During my research for this article, I found no documentation or on-line discussions of vino's TLS design details whatsoever, beyond people commenting that the soundness of TLS in vino is unknown.)

This month, I seem to be offering you more "opt-out" opportunities in my examples than usual. Perhaps I'm becoming less dogmatic with age. Regardless, let's assume you've followed my advice to forego vino's encryption in favor of SSH.

At this point, you're done with the server-side settings. You won't have to change those again. If you restart your GNOME session on remotebox, vino will restart as well, with the options you just set. The following steps, however, are necessary each time you want to initiate a VNC/SSH session.

On mylaptop (the system from which you want to connect to remotebox), open a terminal window, and type this command:

mick@mylaptop:~$ ssh -L 20226:remotebox:20226 admin-slave@remotebox

OpenSSH's -L option sets up a local port-forwarder. In this example, connections to mylaptop's TCP port 20226 will be forwarded to the same port on remotebox. The syntax for this option is "-L [localport]:[remote IP or hostname]:[remoteport]". You can use any available local TCP port you like (higher than 1024, unless you're running SSH as root), but the remote port must correspond to the alternative port you set vino to listen on (20226 in our example), or, if you didn't set an alternative port, it should be VNC's default of 5900.

That's it for the SSH session. You'll be prompted for a password and then given a bash prompt on remotebox. But you won't need it, except to enter exit when your VNC session is finished. It's time to fire up your VNC client.

vino's companion client, vinagre (also known as the GNOME Remote Desktop Viewer), is good enough for our purposes here. On Ubuntu systems, it's located in the Applications menu, in the Internet section. In Figure 4, I've opened the Remote Desktop Viewer and clicked the Connect button. As you can see, rather than remotebox, I've entered localhost as the hostname. I've also entered port number 20226.

When I click the Connect button, vinagre connects to mylaptop's local TCP port 20226, which actually is being listened on by my local SSH process. SSH forwards this connection attempt through its encrypted connection to TCP 20226 on remotebox, which is being listened on by remotebox's vino process.

At this point, I'm prompted for remotebox's vino password (shown in Figure 2). On successful authentication, I'll have full access to my active desktop session on remotebox.

To end the session, I close the Remote Desktop Viewer's session, and then enter exit in my SSH session to remotebox. That's all there is to it.

This technique is easy to adapt to other versions of VNC servers and clients, and probably also to other versions of SSH; there are commercial alternatives to OpenSSH, including Windows versions. I mentioned that VNC client applications are available for a wide variety of platforms; on any such platform, you can use this technique, so long as you also have an SSH client that supports port forwarding.

Conclusion

Thus ends my crash course on how to control graphical applications over networks. I hope my examples of both techniques, SSH with X forwarding and VNC over SSH, help you get started with whatever particular distribution and software packages you prefer. Be safe!

Mick Bauer (darth.elmo@wiremonkeys.org) is Network Security Architect for one of the US's largest banks. He is the author of the O'Reilly book Linux Server Security, 2nd edition (formerly called Building Secure Servers With Linux), an occasional presenter at information security conferences and composer of the "Network Engineering Polka".


Resources

The Cygwin/X Project (information about Cygwin's free X server for Microsoft Windows): x.cygwin.com

Tichondrius' HOWTO for setting up VNC with resumable sessions, Ubuntu-centric but mostly applicable to other distributions: ubuntuforums.org/showthread.php?t=122402

Wikipedia's VNC article, which may be helpful in making sense of the different flavors of VNC: en.wikipedia.org/wiki/Vnc

Figure 4. Using vinagre to Connect to an SSH-Forwarded Local Port


Does anyone remember Linuxcare? Founded in 1998, Linuxcare was a company that provided support services for Linux users in corporate environments. I remember seeing Linuxcare at the first-ever LinuxWorld conference in San Jose, and the thing I took away from that LinuxWorld was the Linuxcare Bootable Business Card (BBC). The BBC was a 50MB cut-down Linux distribution that fit on a business-card-size compact disc. I used that distribution to recover and repair quite a few machines, until the advent of Knoppix. I always loved the portability of that little CD though, and I missed it greatly until I stumbled across Damn Small Linux (DSL) one day.

After reading through the DSL Web site, I discovered that it was possible to run DSL off of a bootable USB key, and that old love for the Bootable Business Card was rekindled in a new way. It wasn't until I had a conversation with fellow sysadmin Kyle Rankin about the PXE boot environment he'd implemented that I realized it might be possible to set up a USB key to do more than merely boot a recovery environment. Before long, I had added the CentOS and Ubuntu netinstalls to my little USB key. Not long after that, I was mentioning this in my favorite IRC channel, and one of the fellows in there suggested I put the code on SourceForge and call it Billix. I'd had a couple beers by then and thought it sounded like a great idea. In that instant, Billix was born.

Billix is an aggregation of many different tools that can be useful to system administrators, all compressed down to fit within a 256MB bootable USB thumbdrive. The 256MB size is not an arbitrary number; rather, it was chosen because USB thumbdrives are very inexpensive at that size (many companies now give them away as advertising gimmicks). This allows me to have many Billix keys lying around, just waiting to be used. Because the keys are cheap or free, I don't feel bad about leaving one in a server for a day or two. If your USB drive is larger than 256MB, you still can use it for its designed purpose, storing files; Billix doesn't hamper normal use of the USB drive in any way. There also is an ISO distribution of Billix, if you want to burn a CD of it, but I feel it's not nearly as convenient as having it on a USB key.

The current Billix distribution (0.21 at the time of this writing) includes the following tools:

Damn Small Linux 4.2.5

Ubuntu 8.04 LTS netinstall

Ubuntu 7.10 netinstall

Ubuntu 6.06 LTS netinstall

Fedora 8 netinstall

CentOS 5.1 netinstall

CentOS 4.6 netinstall

Debian Etch netinstall

Debian Sarge netinstall

Memtest86 memory-checking utility

Ntpwd Windows password-changing utility

DBAN disk-wiping utility

So, with one USB key, a system administrator can recover or repair a machine, install one of eight different Linux distributions, test the memory in a system, get into a Windows machine with a lost password, or wipe the disks of a machine before repurpose or disposal. In order to install any of the netinstall-based Linux distributions, a working Internet connection with DHCP is required, as the netinstall downloads the installation bits for each distribution on the fly from Internet-based mirrors.

Hopefully, you're excited to check out Billix. You simply can download the ISO version and burn it to a CD to get started, but the full utility of Billix really shines when you install it on a USB disk. Before you install it on a USB disk, you need to meet the following prerequisites:

Billix: a Sysadmin's Swiss Army Knife. Turn that spare USB stick into a sysadmin's dream with Billix. BILL CHILDERS


256MB or greater USB drive, with a FAT- or FAT32-based filesystem.

Internet connection with DHCP (for netinstalls only; not required for DSL, Windows password removal or disk wiping with DBAN).

install-mbr (part of the mbr package on Ubuntu or Debian, needed for some USB drives).

syslinux (from the syslinux package on Ubuntu or Debian, required to create the bootsector on the USB drive).

Your system must be capable of booting from USB devices (most have this ability if they're made after 2005).

To install the USB-based version of Billix, first check your drive. If that drive has the U3 Windows software on it, you may want to remove it to unlock all of the drive's capacity (see the Resources section for U3 removal utilities, which are typically Windows-based). Next, if your USB drive has data on it, back up the data. I cannot stress this enough. You will be making adjustments to the partition table of the USB drive, so backing up any data that already is on the key is critical. Download the latest version of Billix from the Sourceforge.net project page to your computer. Once the download is complete, untar the contents of the tarball to the root directory of your USB drive.

Now that the contents of the tarball are on your USB drive, you need to install a Master Boot Record (MBR) on the drive and set a bootsector on the drive. The Master Boot Record needs to be set up on the USB drive first. Issue an install-mbr -p1 <device> (where <device> is your USB drive, such as /dev/sdb). Warning: make sure that you get the device of the USB drive correct, or you run the risk of messing up the MBR on your system's boot device. The -p1 option tells install-mbr to set the first partition as active (that's the one that will contain the bootsector).

Next, the bootsector needs to be installed within the first partition. Run syslinux -s <device/partition> (where <device/partition> is the device and partition of the USB drive, such as /dev/sdb1). Warning: much like installing the MBR, installing the bootsector can be a dangerous operation if you run it on the wrong device, so take care and double-check your command line before pressing the Enter key.
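Given how destructive a mistyped device name can be, one cautious way to script these two steps is a dry-run wrapper like the sketch below; /dev/sdX is a placeholder, and nothing is written unless you explicitly set RUN=1 after triple-checking the target:

```shell
#!/bin/sh
# Sketch: prepare a Billix USB key. Dry run by default: the destructive
# commands are printed, not executed, unless RUN=1 is in the environment.
DEV=${1:-/dev/sdX}            # the whole device, e.g. /dev/sdb
PART=${DEV}1                  # its first partition, e.g. /dev/sdb1

run() {
    if [ "$RUN" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

run install-mbr -p1 "$DEV"    # write the MBR, mark partition 1 active
run syslinux -s "$PART"       # install the bootsector on partition 1
```

Run it once without RUN=1, read the echoed commands carefully, then rerun with RUN=1 only if the device is right.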

Figure 1. The Billix Boot Menu

At this point, your USB drive can be unmounted safely, and you can test it out by booting from the USB drive. Once your system successfully boots from the USB drive, you should see a menu similar to the one shown in Figure 1. Simply choose the number for what you want to boot, run or install, and that distribution will spring into action. If you don't select a number, Damn Small Linux will boot automatically after 30 seconds.

Damn Small Linux is a miniature version of Knoppix (it actually has much of the automatic hardware-detection routines of Knoppix in it). As such, it makes an excellent rescue environment, or it can be used as a quick "trusted desktop" in the event you need to "borrow" a friend's computer to do something. I have used DSL in the past to commandeer a system temporarily at a cybercafé, so I could log in to work and fix a sick server. I've even used DSL to boot and mount a corrupted Windows filesystem, and I was able to save some of the data. DSL is fairly full-featured for its size, and it comes with two window managers (JWM or Fluxbox). It can be configured to save its data back to the USB disk in a persistent fashion, so you always can be sure your critical files are with you and easily accessible.

All the Linux distribution installations have one thing in common: they are all network-based installs. Although this is a good thing for Billix, as they take up very little space (around 10MB for each distro), it can be a bad thing during installation, as the installation time will vary with the speed of your Internet connection. There is one other upside to a network-based installation: in many cases, there is no need to update the newly installed operating system after installation, because the OS bits that are downloaded are typically up to date. Note that when using the Red Hat-based installers (CentOS 4.6, CentOS 5.1 and Fedora 8), the system may appear to hang during the download of a file called minstg2.img. The system probably isn't hanging; it's just downloading that file, which is fairly large (around 40MB), so it can take a while, depending on the speed of the mirror and the speed of your connection. Take care not to specify the USB disk accidentally as the install target for the distribution you are attempting to install.

Troubleshooting Billix

A few things can go wrong when converting a USB key to run Billix (or any USB-based distribution). The most common issue is for the USB drive to fail to boot the system. This can be due to several things. Older systems often split USB disk support into USB-Floppy emulation and USB-HDD emulation; for Billix to work on these systems, USB-HDD needs to be enabled. If your drive came with the U3 Windows-based software vault, this typically needs to be disabled or removed prior to installing Billix.

If you're seeing "MBR123" or something similar in the upper-left corner, but the system is hanging, you have a misconfigured MBR. Try install-mbr again, and make sure to use the -p1 switch. You will need to run syslinux again after running install-mbr. If all else fails, you probably need to wipe the USB drive and begin again. Back up the data on the USB drive, then use fdisk to build a new partition table (make sure to set it as FAT or FAT32). Use mkfs.vfat (with the -F 32 switch if it's a FAT32 filesystem) to build a new blank filesystem, untar the tarball again, and run install-mbr and syslinux on the newly defined filesystem.
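The full wipe-and-rebuild procedure from the troubleshooting steps can be sketched the same cautious way; every name here is a placeholder (/dev/sdX for the device, billix.tar.gz for whatever filename you actually downloaded), and each command is echoed rather than executed:

```shell
#!/bin/sh
# Sketch: rebuild a Billix key from scratch, per the troubleshooting steps.
# Dry run: each command is echoed; remove the echo prefix on a real rebuild.
DEV=/dev/sdX                        # placeholder device; triple-check this
echo fdisk "$DEV"                   # recreate one FAT/FAT32-type partition
echo mkfs.vfat -F 32 "${DEV}1"      # build a fresh, blank FAT32 filesystem
echo mount "${DEV}1" /mnt
echo tar -xf billix.tar.gz -C /mnt  # billix.tar.gz: your downloaded tarball
echo umount /mnt
echo install-mbr -p1 "$DEV"
echo syslinux -s "${DEV}1"
```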

The memtest86 utility has been around for quite a few years, yet it's a key tool for a sysadmin when faced with a flaky computer. It does only one thing, but it does it very well: it tests the RAM of a system very thoroughly. Simply boot off the USB drive, select memtest from the menu, and press Enter, and memtest86 will load and begin testing the RAM of the system immediately. At this point, you can remove the USB drive from the computer; it's no longer needed, as memtest86 is very small and loads completely into memory on startup.

The ntpwd Windows password "cracking" tool can be a controversial tool, but it is included in the Billix distribution because, as a system administrator, I've been asked countless times to get into Windows systems (or accounts on Windows systems) where the password has been lost or forgotten. The ntpwd utility can be a bit daunting, as the UI is text-based and nearly nonexistent, but it does a good job of mounting FAT32- or NTFS-based partitions, editing the SAM account database and saving those changes. Be sure to read all the messages that ntpwd displays, and take care to select the proper disk partition to edit. Also, take the program's advice and nullify a password rather than trying to change it from within the interface; zeroing the password works much more reliably.

DBAN (otherwise known as Darik's Boot and Nuke) is a very good "nuke it from orbit" hard disk wiper. It provides various levels of wipe, from a basic "overwrite the disk with zeros" to a full DoD-certified, multipass wipe. Like memtest86, DBAN is small and loads completely into memory, so you can boot the utility, remove the USB drive, start a wipe and move on to another system. I've used this to wipe clean disks on systems before handing them over to a recycler or before selling a system.

In closing, Billix may not make you coffee in the morning or eradicate Windows from the face of the earth, but having a USB key in your pocket that offers you the functionality to do all of those tasks quickly and easily can make the life of a system administrator (or any Linux-oriented person) much easier.

Bill Childers is an IT Manager in Silicon Valley, where he lives with his wife and two children. He enjoys Linux far too much, and probably should get more sun from time to time. In his spare time, he does work with the Gilroy Garlic Festival, but he does not smell like garlic.


Expanding Billix

It's relatively easy to expand Billix to support other Linux distributions, such as Knoppix or the Ubuntu live CDs. Copy the contents of the Billix USB tarball to a directory on your hard disk, and download the distro you want. Copy the necessary kernel and initrd to the directory where you put the contents of the USB tarball, taking care to rename any files if there are files in that directory with the same name. Copy any compressed filesystems that your new distro may use to the USB drive (for example, Knoppix has the KNOPPIX directory, and Puppy Linux uses PUP_XXX.SFS). Then, look at the boot configuration for that distro (it should be in isolinux.cfg). Take the necessary lines out of that file, and put them in the Billix syslinux.cfg file, changing filenames as necessary. Optionally, you can add a menu item to the boot.msg file. Finally, run syslinux -s <device>, and reboot your system to test out your newly expanded Billix.
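As a concrete illustration, a stanza carried over from a distro's isolinux.cfg into Billix's syslinux.cfg might look like this; the label, filenames and kernel options are hypothetical, not Billix's actual entries:

```
# Added to syslinux.cfg; filenames must match what you copied to the key
LABEL mydistro
  KERNEL mydistro.krn
  APPEND initrd=mydistro.img ramdisk_size=100000
```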

I have a 2GB USB drive that has a "Super-Billix" installation that includes Knoppix and Ubuntu 8.04. An added bonus of having the entire Ubuntu live CD in your pocket is that, thanks to the speed of USB 2.0, you can install Ubuntu in less than ten minutes, which would be really useful at an installfest. There is good information on creating Ubuntu-bootable USB drives available at the Pendrive Linux Web site.

Alternatively, a really neat thing to do (but way beyond the scope of this article) is to convert Billix into a network-boot (via Pre-Execution Environment, or PXE) environment. I've actually got a VMware virtual machine running Billix as a PXE boot server.

Resources

Billix Project Page: sourceforge.net/projects/billix

Damn Small Linux: www.damnsmalllinux.org

DBAN Project Page: dban.sourceforge.net

Knoppix: www.knoppix.net

Pendrive Linux: www.pendrivelinux.com

Syslinux: syslinux.zytor.com/index.php

Pxelinux: syslinux.zytor.com/pxe.php

U3 Removal Software: www.u3.com/uninstall


If a tree falls in the woods and no one is there to hear it, does it make a sound? This is the classic query designed to place your mind into the Zen-like state known as the silent mind. Whether or not you want to hear a tree fall, if you run a network, you probably want to hear a server when it goes down. Many organizations utilize the long-established Simple Network Management Protocol (SNMP) as a way to monitor their networks proactively and listen for things going down.

At a rudimentary level, SNMP requires only two items to work: a management server and a managed device (or devices). The management server pulls status and health information at regular intervals from the managed devices and stores the information in a table. Managed devices use local SNMP agents to notify the management server when defined behavior occurs (such as errors or "traps"), which are stored in the same table on the server. The result is an accurate, real-time reporting mechanism for outages. However, SNMP as a protocol does not stipulate how the data in these tables is to be presented and managed for the end user. That's where a promising new open-source network-monitoring software called Zenoss (pronounced Zeen-ohss) comes in.

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment. The Core version of Zenoss used in this article is freely available under the GPLv2. An Enterprise version also is available with additional features and support. In this article, we install Zenoss on a CentOS 5.1 system to observe its usefulness in a network-monitoring role. From there, we create a simulated multisystem server network using the following systems: a Fedora-based Postfix e-mail server, an Ubuntu server running Apache and a Windows server running File and Print services. To conserve space, only the CentOS installation is discussed in detail here. For the managed systems, only SNMP installation and configuration are covered.

Building the Zenoss Server

Begin by selecting your hardware. Zenoss lacks specific hardware requirements, but it relies heavily on MySQL, so you can use MySQL requirements as a rough guideline. I recommend using the fastest processor available, 1GB of memory, hard disks fast enough to provide acceptable MySQL performance and Gigabit Ethernet for the network. I ran several test configurations, and this configuration seemed adequate enough for a medium-size network (100+ nodes/devices). To keep configuration simple, all firewalls and SELinux instances were disabled in the test environment. If you use firewalls in your environment, open ports 161 (SNMP), 8080 (Zenoss Management Page) and 514 (if you integrate syslog with Zenoss).

Install CentOS 5.1 on the server using your own preferences. I used a bare install with no X Window System or desktop manager. Assign a static IP address and any other pertinent network information (DNS servers and so forth). After the OS install is complete, install the following packages using the yum command below:

yum install mysql mysql-server net-snmp net-snmp-utils gmp httpd

If the mysqld or the httpd service has not started after yum installs it, start it, and set it to run for your configured runlevel. Next, download the latest Zenoss Core rpm from Sourceforge.net (2.1.3 at the time of this writing), and install it using rpm from the command line. To start all the Zenoss-related daemons after the rpm has been installed, type the following at a command prompt:

service zenoss start
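The earlier step of starting MySQL and Apache and pinning them to your runlevel would look like this on CentOS 5. It is shown as a dry run (each command is echoed) since service and chkconfig change system state; drop the echo on the actual server:

```shell
# Start the services Zenoss depends on and enable them at boot (CentOS 5).
# Dry run: the commands are printed rather than executed.
for svc in mysqld httpd; do
    echo service "$svc" start
    echo chkconfig "$svc" on
done
```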

Launch a Web browser from any machine, and type the IP address of the Zenoss server using port 8080 (for example, http://192.168.142.6:8080). Log in to the site using the default account admin with a password of zenoss. This brings up the main dashboard. The dashboard is a compartmentalized view of the state of your managed devices. If you don't like the default display, you can arrange your dashboard any way you want using the various drop-down lists on the portlets (windows). I recommend setting the Production States portlet to display Production, so we can see our test systems after they are added.

Almost everything related to managed devices in Zenoss revolves around classes. With classes, you can create an infinite number of systems, processes or service classifications to monitor. To begin adding devices, we need to set our SNMP community strings at the top-level Devices class. SNMP community strings are like passphrases used to authenticate traffic between devices. If one device wants to communicate with another, they must have matching community names/strings. In many deployments, administrators use the default community name of public (and/or private), which creates a security risk. I recommend changing these strings and making them into a short phrase. You can add numbers and characters to make the community name more complex to guess/crack, but I find phrases easier to remember.

Zenoss and the Art of Network Monitoring
If a server goes down, do you want to hear it? JERAMIAH BOWLING

18 | www.linuxjournal.com

SYSTEM ADMINISTRATION

Click on the Devices link on the navigation menu on the left, so that Devices is listed near the top of the page. Click on the zProperties tab, and scroll down. Enter an SNMP community string in the zSNMPCommunity field. For our test environment, I used the string whatsourclearanceclarence. You can use different strings with different subclasses of systems or individual systems, but by setting it at the Devices class, it will be used for any subclasses unless it is overridden. You also could list multiple strings in the zSNMPCommunities under the Devices class, which allows you to define multiple strings for the discovery process discussed later. Make sure your community string (zSNMPCommunity) is in this list.

Installing Net-SNMP on Linux Clients

Now, let's set up our Linux systems so they can talk to the Zenoss server. After installing and configuring the operating systems on our other Linux servers, install the Net-SNMP package on each using the following command on the Ubuntu server:

sudo apt-get install snmpd

And, on the Fedora server, use:

yum install net-snmp

Once the Net-SNMP packages are installed, edit out any other lines in the Access Control sections at the beginning of /etc/snmp/snmpd.conf, and add the following lines:

#       sec.name   source            community
com2sec local      localhost         whatsourclearanceclarence
com2sec mynetwork  192.168.142.0/24  whatsourclearanceclarence

#       group.name sec.model sec.name
group   MyROGroup  v1        local
group   MyROGroup  v1        mynetwork
group   MyROGroup  v2c       local
group   MyROGroup  v2c       mynetwork

#       incl/excl subtree   mask
view    all        included .1      80

#       context sec.model sec.level prefix read write notif
access  MyROGroup ""      any       noauth exact  all  none  none

Do not edit out any lines beneath the last Access Control sections. Please note that the above is only a mildly restrictive configuration. Consult the snmpd.conf file or the Net-SNMP documentation if you want to tighten access. On the Ubuntu server, you also may have to change the following line in the /etc/default/snmpd file to allow SNMP to bind to anything other than the local loopback address:

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'
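Before moving on, it's worth confirming that snmpd answers queries with the new community string. A quick check, assuming the snmpwalk utility is installed (it ships with net-snmp-utils on Fedora and the snmp package on Ubuntu):

```shell
# Restart snmpd to pick up the new configuration, then walk the system subtree
sudo /etc/init.d/snmpd restart
snmpwalk -v 2c -c whatsourclearanceclarence localhost system
```

If the walk prints system information instead of timing out, the client is ready to be added to Zenoss.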

Installing SNMP on Windows

On the Windows server, access the Add/Remove Programs utility from the Control Panel. Click on the Add/Remove Windows Components button on the left. Scroll down the list of Components, check off Management and Monitoring Tools, and click on the Details button. Check Simple Network Management Protocol in the list, and click OK to install. Close the Add/Remove window, and go into the Services console from Administrative Tools in the Control Panel. Find the SNMP service in the list, right-click on it, and click on Properties to bring up the service properties tabs. Click on the Traps tab, and type in the community name. In the list of Trap Destinations, add the IP address of the Zenoss server. Now, click on the Security tab, check off the Send authentication trap box, enter the community name, and give it READ-ONLY rights. Click OK, and restart the service.

Return to the Zenoss management Web page. Click the Devices link to go into the subclass of /Devices/Servers/Windows, and on the zProperties tab, enter the name of a domain admin account and password in the zWinUser and zWinPassword fields. This account gives Zenoss access to the Windows Management Instrumentation (WMI) on your Windows systems. Make sure to click Save at the bottom of the page before navigating away.

Adding Devices into Zenoss

Now that our systems have SNMP, we can add them into Zenoss. Devices can be added individually or by scanning the network. Let's do both. To add our Ubuntu server into Zenoss, click on the Add Device link under the Management navigation section. Enter the IP address of the server and the community name. Under Device Class Path, set the selection to /Server/Linux. You could add a variety of other hardware, software and Zenoss information on this page before adding a system, but at a minimum, an IP address, name and community name is required (Figure 1). Click the Add Device button, and the discovery process runs. When the results are displayed, click on the link to the new device to access it.

Figure 1. Adding a Device into Zenoss

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment.


To scan the network for devices, click the Networks link under the Browse By section of the navigation menu. If your network is not in the list, add it using CIDR notation. Once added, check the box next to your network, and use the drop-down arrow to click on the Select Discover Devices option. You will see a similar results page as the one from before. When complete, click on the links at the bottom of the results page to access the new devices. Any device found will be placed in the /Discovered class. Because we should have discovered the Fedora server and the Windows server, they should be moved to the /Devices/Servers/Linux and /Devices/Servers/Windows classes, respectively. This can be done from each server's Status tab by using the main drop-down list and selecting Manage→Change Class.
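As a side note, the /24 suffix in CIDR notation is just a compact netmask; a prefix length expands to a dotted-quad mask, as this small shell sketch shows:

```shell
# Expand a CIDR prefix length (here /24, as in 192.168.142.0/24) into a netmask
prefix=24
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
netmask="$(( (mask >> 24) & 255 )).$(( (mask >> 16) & 255 )).$(( (mask >> 8) & 255 )).$(( mask & 255 ))"
echo "$netmask"
```

For /24, this prints 255.255.255.0, the mask Zenoss will scan.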

If all has gone well so far, we have a functional SNMP monitoring system that is able to monitor heartbeat/availability (Figure 2) and performance information (Figure 3) on our systems. You can customize other various Status and Performance Monitors to meet your needs, but here, we will use the default localhost monitors.

Figure 2. The Zenoss Dashboard

Figure 3. Performance data is collected almost immediately after discovery.

Creating Users and Setting E-Mail Alerts

At this point, we can use the dashboard to monitor the managed devices, but we will be notified only if we visit the site. It would be much more helpful if we could receive alerts via e-mail. To set up e-mail alerting, we need to create a separate user account, as alerts do not work under the admin account. Click on the Settings link under the Management navigation section. Using the drop-down arrow on the menu, select Add User. Enter a user name and e-mail address when prompted. Click on the new user in the list to edit its properties. Enter a password for the new account, and assign a role of Manager. Click Save at the bottom of the page. Log out of Zenoss, and log back in with the new account. Bring the settings page back up, and enter your SMTP server information. After setting up SMTP, we need to create an Alerting Rule for our new user. Click on the Users tab, and click on the account just created in the list. From the resulting page, click on the Edit tab, and enter the e-mail address to which you want alerts sent. Now, go to the Alerting Rules tab, and create a new rule using the drop-down arrow. On the Edit tab of the new Alerting Rule, change the Action to email, Enabled to True, and change the Severity formula to >= Warning (Figure 4). Click Save.

Figure 4. Creating an Alert Rule

Figure 5. Zenoss alerts are sent fresh to your mailbox.



The above rule sends alerts when any Production server experiences an event rated Warning or higher (Figure 5). Using a filter, you can create any number of rules and have them apply only to specific devices or groups of devices. If you want to limit your alerts by time, to working hours for example, use the Schedule tab on the Alerting Rule to define a window. If no schedule is specified (the default), the rule runs all the time. In our rule, only one user will be notified. You also can create groups of users from the Settings page, so that multiple people are alerted, or you could use a group e-mail address in your user properties.

Services and Processes

We can expand our view of the test systems by adding a process and a service for Zenoss to monitor. When we refer to a process in Zenoss, we mean an active program, usually a daemon, running on a managed device. Zenoss uses regular expressions to monitor processes.

To monitor Postfix on the mail server, first, let's define it as a process. Navigate to the Processes page under the Classes section of the navigation menu. Use the drop-down arrow next to OS Processes, and click Add Process. Enter Postfix as the process ID. When you return to the previous page, click on the link to the new process. On the Edit tab of the process, enter master in the Regex field. Click Save before navigating away. Go to the zProperties tab of the process, and make sure the zMonitor field is set to True. Click Save again. Navigate back to the mail server from the dashboard, and on the OS tab, use the topmost menu's drop-down arrow to select Add→Add OSProcess. After the process has been added, we will be alerted if the Postfix process degrades or fails. While still on the OS tab of the server, place a check mark next to the new Postfix process, and from the OS Processes drop-down menu, select Lock OSProcess. On the next set of options, select Lock from deletion. This protects the process from being overwritten if Zenoss remodels the server.

Figure 6. Zenoss comes with a plethora of predefined Windows services to monitor.
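The Regex field works like a pattern match against the device's process table. Its effect can be simulated with grep against a sample process listing (the lines below are illustrative Postfix processes, not real Zenoss output):

```shell
# Simulated process table entries, as a Zenoss remodel might see them
ps_output='master -w
pickup -l -t fifo -u
qmgr -l -t fifo -u'

# Count the entries that match the article's Regex value for Postfix
matches=$(printf '%s\n' "$ps_output" | grep -c 'master')
echo "$matches"
```

Only the master daemon line matches, which is exactly the process we want tracked.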

Services in Zenoss are defined by active network ports instead of running daemons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1-65535 and include common network apps/protocols, such as SMTP (Port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the main menu's drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class Field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server, and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).

Only the Beginning

There are a multitude of other features in Zenoss that space here prevents covering, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9) and Zenpacks that provide additional monitoring and performance-capturing capabilities for common applications.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind as long as you work with networks, with Zenoss, at least you will be able to sleep at night knowing you will hear things when they go down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

Resources

Zenoss: www.zenoss.com

Zenoss SourceForge Downloads Page: sourceforge.net/project/showfiles.php?group_id=163126

NET-SNMP: net-snmp.sourceforge.net

CentOS: www.centos.org

CentOS 5 Mirrors: isoredirect.centos.org/centos/5/isos/i386

Figure 7. Monitoring HTTP as an IPService

Figure 8. Zenoss automatically maps your network for you.

Figure 9. Multiple sites can be monitored geographically with the Google Maps API.


In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market. Dell and HP Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects, such as the Etherboot Project and the Linux Terminal Server Project, have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys world-wide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?

A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot eXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP, stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4); for example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

Thin Clients Booting over a Wireless Bridge
How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.

Figure 1. LTSP Traffic Profile and Boot Sequence
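Quick arithmetic puts that 5kbps figure in perspective; even a full hour of thin-client traffic at that rate amounts to only a couple megabytes:

```shell
# Data moved by one hour of a 5kbps (kilobits-per-second) thin-client session
seconds=3600
rate_kbps=5
kilobytes=$(( seconds * rate_kbps / 8 ))
echo "$kilobytes"    # kilobytes per hour
```

That works out to roughly 2.2MB per hour, far less than transferring even one sizable document in a fat-client setup.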

How to Set Up a Thin-Client Network with a Wireless Bridge

One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1 serving PXE clients.

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap:    192.168.0.0/24
rpc.mountd: 192.168.0.0/24
rpc.statd:  192.168.0.0/24
in.tftpd:   192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart
sudo invoke-rc.d nfs-common restart
sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.
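For example, assuming the default i386 chroot location that ltsp-build-client uses (the path may differ on your install):

```shell
# Give the client chroot the same APT sources as the server
sudo cp /etc/apt/sources.list /opt/ltsp/i386/etc/apt/sources.list
```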

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide in getting the bridge running so that the clients can boot over the bridge:

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all daemon programs manually.

Evaluating the Performance of a Thin-Client Network

The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to the wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber; and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

Conclusion

As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our tests show, as illustrated in Figure 5, that heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.

Acknowledgement

This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03/IN3/1396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now, you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in remote power and remote terminals or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup

You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.
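The TFTP half of that exchange is simple enough to see in a packet capture: an RFC 1350 read request (RRQ) is just a two-byte opcode followed by a NUL-terminated filename and transfer mode. A sketch of the RRQ a client would send for pxelinux.0:

```shell
# Build an RFC 1350 RRQ by hand: opcode 0x0001, "pxelinux.0", NUL, "octet", NUL
printf '\000\001pxelinux.0\000octet\000' > rrq.bin

# 2 (opcode) + 11 (filename + NUL) + 6 (mode + NUL) = 19 bytes
wc -c < rrq.bin
```

Everything after this one UDP datagram is just the server streaming the file back in 512-byte DATA blocks.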

Configure DHCP

Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), and locate the subnet section (or each host section if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server

filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}
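As a quick sketch of how this stanza might be staged before merging it into /etc/dhcpd.conf, the following writes the example subnet section to a scratch file and confirms both PXE directives are present. The addresses are the article's example values, and the scratch filename is invented for illustration; a real deployment would also run dhcpd -t to syntax-check the merged file.

```shell
# Sketch: stage the PXE-enabled subnet stanza in a scratch file.
# Addresses are the article's example network; adjust for your own.
cat > dhcpd-pxe.conf <<'EOF'
subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}
EOF

# Confirm the two PXE directives made it in (should print 2).
grep -c 'next-server\|filename' dhcpd-pxe.conf
```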

PXE Magic: Flexible Network Booting with Menus

Set up a PXE server, and then add menus to boot kickstart images, rescue disks and diagnostic tools, all from the network. KYLE RANKIN

SYSTEM ADMINISTRATION

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot.

Install TFTP

After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package, or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp  dgram  udp  wait  root  /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
#   the trivial file transfer protocol. The tftp
#   protocol is often used to boot diskless
#   workstations, download configuration files to
#   network-aware printers, and to start the
#   installation process for some operating systems.
service tftp
{
    disable     = no
    socket_type = dgram
    protocol    = udp
    wait        = yes
    user        = root
    server      = /usr/sbin/in.tftpd
    server_args = -s /var/lib/tftpboot
    per_source  = 11
    cps         = 100 2
    flags       = IPv4
}
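As a sanity check before dropping a fragment like this into /etc/xinetd.d/tftp, you can stage it locally and confirm the service is actually enabled (disable = no). A throwaway sketch, using a scratch filename invented for illustration:

```shell
# Sketch: stage the xinetd fragment in a scratch file and verify that
# the service is enabled; a real setup writes /etc/xinetd.d/tftp instead.
cat > tftp.xinetd <<'EOF'
service tftp
{
    disable     = no
    socket_type = dgram
    protocol    = udp
    wait        = yes
    user        = root
    server      = /usr/sbin/in.tftpd
    server_args = -s /var/lib/tftpboot
}
EOF

# Prints "no" when the service is enabled.
awk '/disable/ { print $3 }' tftp.xinetd
```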

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.

Add Syslinux

Now that TFTP is set up, all that is left to do is install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.

PXE Menu Magic

You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS, with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
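To make the search order concrete, here is a small shell sketch that reproduces the list of filenames pxelinux would try for a hypothetical host with MAC 88:99:AA:BB:CC:DD and IP 10.0.0.200 (both values invented for illustration):

```shell
#!/bin/sh
# Sketch: the configuration files pxelinux would search for, in order,
# for a hypothetical host (MAC 88:99:AA:BB:CC:DD, IP 10.0.0.200).

mac="88:99:AA:BB:CC:DD"
ip="10.0.0.200"

# 1) "01-" plus the MAC with hyphens, lowercased.
echo "01-$(echo "$mac" | tr ':' '-' | tr 'A-Z' 'a-z')"

# 2) The IP address in uppercase hex (10.0.0.200 -> 0A0000C8),
#    shortened by one digit on each failed lookup.
hexip=$(printf '%02X%02X%02X%02X' $(echo "$ip" | tr '.' ' '))
while [ -n "$hexip" ]; do
    echo "$hexip"
    hexip=${hexip%?}
done

# 3) The catch-all file.
echo "default"
```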

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux

label linux
    kernel vmlinuz-centos-3.6
    append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?

The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what



happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools, like Knoppix and memtest86+, available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus

Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more-specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

label 2
    kernel vmlinuz-centos-4.5-64
    append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option lists a message, if there are any to display, by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------

<F1> Main | <F2> Help
Default image will boot in 30 seconds...

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.
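Because timeout values in tenths of a second are easy to get wrong, here is a throwaway sketch that converts the timeout directive in a pxelinux default file to seconds. The file content mirrors the example above, written to a local scratch file; the awk one-liner is just a convenience, not part of syslinux:

```shell
# Sketch: check what a pxelinux "timeout" directive means in seconds.
cat > pxelinux-default <<'EOF'
default 1
timeout 300
prompt 1
display f1.msg
EOF

# timeout is in tenths of a second: 300 -> 30 seconds.
awk '/^timeout/ { print $2 / 10 " seconds" }' pxelinux-default
```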

Extra Features: PXE Rescue Disk

One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk; I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the ones I used above in my example:

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx5.1.1 and miniroot-knx5.1.1.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove the 2




from the append line.

The final step might be the largest for you if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.
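The staging steps above (copy and rename the kernel and miniroot to the TFTP root, then copy the full CD contents to the NFS export) can be sketched as follows. Local stand-in directories are used so the commands are harmless to run; in a real setup, the source would be the mounted CD, and the targets /var/lib/tftpboot and the NFS-exported /mnt/knoppix/5.1.1:

```shell
#!/bin/sh
# Sketch: stage Knoppix boot files for PXE.  Stand-in directories are
# used here; substitute the real mounted CD and server paths.
cdrom=cdrom-contents      # stands in for the mounted Knoppix CD
tftproot=tftpboot         # stands in for /var/lib/tftpboot
nfsdir=knoppix-5.1.1      # stands in for the NFS-exported directory

mkdir -p "$cdrom/boot/isolinux" "$tftproot" "$nfsdir"
: > "$cdrom/boot/isolinux/vmlinuz"      # placeholder files for the demo
: > "$cdrom/boot/isolinux/miniroot.gz"

# Kernel and initial ramdisk go to the TFTP root, renamed distinctly.
cp "$cdrom/boot/isolinux/vmlinuz"     "$tftproot/vmlinuz-knx5.1.1"
cp "$cdrom/boot/isolinux/miniroot.gz" "$tftproot/miniroot-knx5.1.1.gz"

# Everything else goes to the NFS share that Knoppix mounts at boot.
cp -R "$cdrom/." "$nfsdir/"
```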

Extra Features: Memtest86+

Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
    kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion

There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate

which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.



Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan

IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825–1829). Later, in 1998, these RFCs were deprecated by RFCs 2401–2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

Creating VPNs with IPsec and SSL/TLS

How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN

Figure 1. A Basic VPN Tunnel

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. S/WAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a setmark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN

OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan

First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
    #
    # Simply use raw RSA keys.
    # After starting openswan, run:
    # ipsec showhostkey --left (or --right)
    # and fill in the connection similarly
    # to the example below.
    left=192.168.0.89
    leftrsasigkey=0sAQPP...
    # The remote user.
    #
    right=192.168.0.92
    rightrsasigkey=0sAQON...
    type=tunnel
    auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsa.key

Then, copy and paste the contents of rsa.key into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic

ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. The output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                             [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                        [OK]
NETKEY detected, testing for disabled ICMP send_redirects   [OK]
NETKEY detected, testing for disabled ICMP accept_redirects [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets) [OK]
Checking that pluto is running                              [OK]
Checking for 'ip' command                                   [OK]
Checking for 'iptables' command                             [OK]
Opportunistic Encryption Support                            [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets flowing in the same direction; this is an identifier of the connection.
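You can sanity-check a capture like this mechanically. The sketch below runs over the four sample lines above, saved to a scratch file (the filename is invented for illustration), and counts distinct SPIs; a simple two-peer tunnel should show exactly one SPI per direction, so two in total:

```shell
# Sketch: count distinct SPIs in captured ESP traffic.  The sample
# lines mirror the tcpdump output shown above.
cat > esp.log <<'EOF'
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)
EOF

# Extract the spi= field and count unique values (expect 2, one per direction).
sed 's/.*spi=\(0x[0-9a-f]*\).*/\1/' esp.log | sort -u | wc -l
```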

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.



Example: Setting Up a VPN Tunnel with OpenVPN

First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side in a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest key; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.
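That mirrored ordering is easy to verify mechanically. The sketch below writes both sample configs to local scratch files and checks that the client's ifconfig addresses are the server's, swapped (the server hostname is illustrative):

```shell
# Sketch: verify that client.conf mirrors server.conf's ifconfig line.
cat > server.conf <<'EOF'
dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo
EOF

cat > client.conf <<'EOF'
remote server.example.com
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo
EOF

# The server's "local peer" pair must equal the client's pair reversed.
srv=$(awk '/^ifconfig/ { print $2, $3 }' server.conf)
cli=$(awk '/^ifconfig/ { print $3, $2 }' client.conf)
[ "$srv" = "$cli" ] && echo "ifconfig lines mirror correctly"
```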

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and start it on the client by running openvpn client.conf.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar 3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official
port number assignment by IANA.  OpenVPN 2.0-beta16 and earlier used 5000
as the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).

IPsec and OpenVPN: a Short Comparison

IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement

Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of the Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era

Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security

Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart fromthat of other database objectsmdashand from other programminglanguagesmdashis that stored programs may execute with thepermissions of the user who created the stored procedurerather than those of the user who is executing the storedprocedure This model allows users to perform actions viaa stored procedure that they would not be authorized toperform using normal SQL

This facility sometimes called definer rights security allowsus to tighten our database security because we can ensurethat a user gains access to tables only via stored program codethat restricts the types of operations that can be performed onthose tables and that can implement various business and dataintegrity rules For instance by establishing a stored programas the only mechanism available for certain table inserts orupdates we can ensure that all of these operations arelogged and we can prevent any invalid data entry frommaking its way into the table
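As a sketch of how this might look in MySQL 5 (all table, column and routine names here are invented for illustration), a procedure can enforce a business rule and log every insert while running with its definer's privileges:

```sql
-- Hypothetical sketch: callers go through this procedure instead of
-- inserting into the tables directly. SQL SECURITY DEFINER (the default)
-- means it runs with the creating user's rights, not the caller's.
DELIMITER //
CREATE PROCEDURE add_transaction(IN p_account INT, IN p_amount DECIMAL(10,2))
    SQL SECURITY DEFINER
BEGIN
  -- reject invalid data before it reaches the table
  IF p_amount > 0 THEN
    INSERT INTO transactions (account_id, amount) VALUES (p_account, p_amount);
    -- every successful insert is also logged
    INSERT INTO audit_log (account_id, amount, logged_at)
      VALUES (p_account, p_amount, NOW());
  END IF;
END //
DELIMITER ;
```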

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.
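The restriction itself comes down to what is granted to the application account; assuming a hypothetical procedure and account name, it might look like:

```sql
-- The application account can call the procedure, but holds no direct
-- SELECT/INSERT/UPDATE privileges on the underlying tables.
GRANT EXECUTE ON PROCEDURE bank.transfer_funds TO 'app_user'@'%';
```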

SYSTEM ADMINISTRATION

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

Data Abstraction

It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic

Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequence of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
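Under that model, the application's side of the interaction collapses to something like the following single call (the procedure and parameter names are invented for illustration):

```sql
-- One network round trip: the balance checks, UPDATEs and COMMIT all
-- happen inside the procedure on the server.
CALL transfer_funds(@from_account, @to_account, 500.00, @status);
SELECT @status;
```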

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications

Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

For instance, in a banking application, transfer-of-funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed

It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation

Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping

It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?

Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements such as LOCK TABLES or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.
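As a sketch of this idea, a MySQL-specific statement such as REPLACE can be hidden behind a procedure whose name and signature stay the same on every RDBMS (the routine and table shown here are hypothetical; on another server, the body would be rewritten with that vendor's upsert syntax):

```sql
-- The application always calls save_setting(); only this body is
-- MySQL-specific.
DELIMITER //
CREATE PROCEDURE save_setting(IN p_name VARCHAR(64), IN p_value VARCHAR(255))
BEGIN
  REPLACE INTO settings (name, value) VALUES (p_name, p_value);
END //
DELIMITER ;
```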

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations

MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes and will take effect for all applications that access the database, including ad hoc SQL.
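A minimal sketch of the idea, with invented table names, might look like this:

```sql
-- Keep a denormalized order total up to date on every insert,
-- regardless of which application (or ad hoc SQL) did the insert.
CREATE TRIGGER order_items_ai AFTER INSERT ON order_items
FOR EACH ROW
  UPDATE orders
     SET total = total + NEW.amount
   WHERE order_id = NEW.order_id;
```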

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.
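For instance, a nightly purge job along these lines could be scheduled (the table and retention period are invented for illustration):

```sql
-- Requires MySQL 5.1+ with the event scheduler enabled.
CREATE EVENT purge_old_sessions
  ON SCHEDULE EVERY 1 DAY
  DO
    DELETE FROM sessions WHERE last_seen < NOW() - INTERVAL 30 DAY;
```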

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
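A trivial sketch of such a function (the name and the underlying calculation are invented; a real one would wrap a genuinely complex business rule):

```sql
-- Once created, the function is usable from any plain SQL statement,
-- e.g. SELECT id, order_age_days(placed_at) FROM orders;
CREATE FUNCTION order_age_days(placed DATETIME) RETURNS INT
  RETURN DATEDIFF(NOW(), placed);
```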

You Decide

The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006; with Steven Feuerstein). Guy can be contacted at guyharrison@quest.com.


In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat

Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare, on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses, by looking at your environment, as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat

Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.
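The two-line format and the 600 permissions described above can be produced with a couple of commands. This sketch generates a long random key and writes to a temporary directory rather than /etc/ha.d, so it can be tried safely without root:

```shell
# Create a scratch stand-in for /etc/ha.d (use the real path on a node)
HA_DIR=$(mktemp -d)

# A long random key: hash 64 random bytes
KEY=$(head -c 64 /dev/urandom | sha1sum | awk '{print $1}')

# Two lines: the auth directive, then the method and key for that ID
printf 'auth 1\n1 sha1 %s\n' "$KEY" > "$HA_DIR/authkeys"

# Heartbeat refuses to start unless the file is locked down
chmod 600 "$HA_DIR/authkeys"
cat "$HA_DIR/authkeys"
```

On a real node, the same steps would target /etc/ha.d/authkeys and the file would also be chowned to root.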

The ha.cf File

The next file to configure is ha.cf, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

Listing 1. The /etc/ha.d/authkeys File

auth 1

1 sha1 ThisIsAVeryLongAndBoringPassword

The first thing you need to specify is the keepalive, the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the cluster declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes when all members of a cluster are bootedat the same time there is a significant length of timebetween when Heartbeat is started and before the net-work or serial interfaces are ready to send and receiveheartbeats The optional initdead directive takes care ofthis issue by setting an initial deadtime that applies onlywhen Heartbeat is first started

You can send heartbeats over serial or Ethernet linksmdasheither works fine I like serial for two server clusters thatare physically close together but Ethernet works just aswell The configuration for serial ports is easy simply spec-ify the baud rate and then the serial device you are usingThe serial device is one place where the hacf files on eachnode may differ due to the serial port having differentnames on each host If you donrsquot know the tty to whichyour serial port is assigned run the following command

setserial -g devttyS

If anything in the output says ldquoUART unknownrdquo thatdevice is not a real serial port If you have several serial portsexperiment to find out which is the correct one

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary, standby state


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos' heartbeat IP address is 192.168.1.11

# phobos' heartbeat IP address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File

The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).


Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.
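As a sketch of the start/stop interface Heartbeat expects, the skeleton below mimics it with placeholder actions (the service name and echo lines are invented; a real script would invoke the actual daemon):

```shell
# Hypothetical resource-script skeleton, written as a function so it can
# be exercised directly; an init script would dispatch on "$1" the same way.
myservice() {
  case "$1" in
    start) echo "Starting myservice" ;;   # real script: launch the daemon
    stop)  echo "Stopping myservice" ;;   # real script: shut it down
    *)     echo "Usage: myservice {start|stop}" >&2; return 1 ;;
  esac
}

myservice start
myservice stop
```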

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and it is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting

Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly, or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion

I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha


In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use, Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down, as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation

To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of disk space. I recommend at least a 1GHz processor with 512MB or more of memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

SYSTEM ADMINISTRATION

numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (./startconsole -u admin http://fullyqualifieddomainname:port) using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.
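A scheduled backup can be as simple as a cron entry calling the db2bak script shipped in the instance directory (a sketch; adjust the slapd-yourservername path to match your install):

```
# root's crontab: full backup of the directory database nightly at 2:30am
30 2 * * * /opt/fedora-ds/slapd-yourservername/db2bak
```

By default, db2bak writes a time-stamped backup directory under the instance's bak directory, so successive runs do not overwrite each other.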

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for

www.linuxjournal.com | 43

Figure 1. Install Options
Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plug-ins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying

permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate

the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.
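Users, with the Posix attributes needed for network logins, also can be added from the command line with an LDIF file. Everything below (the names, IDs, the dc=example1,dc=com suffix and port 3891) is illustrative only:

```shell
# Write a minimal login-capable entry; the attributes are the standard
# inetOrgPerson/posixAccount LDAP schema.
cat > user.ldif <<'EOF'
dn: uid=jsmith,ou=Accounting,dc=example1,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
cn: John Smith
sn: Smith
uid: jsmith
uidNumber: 5001
gidNumber: 5001
homeDirectory: /home/jsmith
loginShell: /bin/bash
EOF

# Then load it with OpenLDAP's client tools, for example:
#   ldapmodify -a -x -H ldap://server1.example1.com:3891 \
#       -D "cn=Directory Manager" -W -f user.ldif
grep -c '^objectClass' user.ldif
```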

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora/FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already to make life easier.
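If DNS is not available, static entries on each server are enough for replication's name resolution; the addresses and hostnames below are examples only:

```
# /etc/hosts (on both servers)
192.168.0.11   server1.example1.com   server1
192.168.0.12   server2.example1.com   server2
```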

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add and then the Save button at the bottom. On




Figure 5. User Screen
Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now, you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync. You also could choose to schedule your updates instead. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option and copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.
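You also can spot-check replication from a shell with OpenLDAP's ldapsearch: change an entry on server one, then query server two for it. The hostname, port and suffix below follow our example setup and will differ in yours:

```
ldapsearch -x -H ldap://server2.example1.com:3891 \
    -b "dc=example1,dc=com" "(uid=jsmith)"
```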

Authenticating Desktop Clients

With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM, the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM, instead of your local passwd and shadow files, to authenticate users.
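For reference, the result of enabling LDAP in the utility looks roughly like the following fragments; the host, port and base DN are from our example setup and will differ per site:

```
# /etc/ldap.conf
host server1.example1.com
port 3891
base dc=example1,dc=com

# /etc/nsswitch.conf (relevant lines)
passwd: files ldap
shadow: files ldap
group:  files ldap
```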


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll

to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some dll issues with

version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and makes it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.




Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525



Cfengine is known by many system administrators to be an excellent tool to automate manual tasks on UNIX and Linux-based machines. It also is the most comprehensive framework to execute administrative shell scripts across many servers running disparate operating systems. Although cfengine is certainly good for these purposes, it also is widely considered the best open-source tool available for configuration management. Using cfengine, sysadmins with a large installation of, say, 800 machines can have information about their environment quickly that otherwise would take months to gather, as well as the ability to change the environment in an instant. For an initial example, if you have a set of Linux machines that need to have a different /etc/nsswitch.conf and then have some processes restarted, there's no need to connect to each machine and perform these steps or even to write a script and run it on the machines once they are identified. You simply can tell cfengine that all the Linux machines running Fedora/Debian/CentOS with XGB of RAM or more need to use a particular /etc/nsswitch.conf until a newer one is designated. Cfengine can do all that in a one-line statement.

Cfengine's configuration management capabilities can work in several different ways. In this article, I focus on a make-it-so-and-keep-it-so approach. Let's consider a small hosting company configuration with three administrators and two data centers (Figure 1).

Each administrator can use a Subversion/CVS sandbox to hold repositories for each data center. The cfengine client will run on each client machine, either through a cron job or a cfengine execution dæmon, and pull the cfengine configuration files appropriate for each machine from the server. If there is work to be done for that particular machine, it will be carried out and reported to the server. If there are configuration files to copy, the ones active on the client host will be replaced by the copies on the cfengine server. (Cfengine will not replace a file if the copy process is partial or incomplete.)

A cfengine implementation has three major components:

Version control: this usually consists of a versioning system, such as CVS or Subversion.

Cfengine internal components: cfservd, cfagent, cfexecd, cfenvd, cfagent.conf and update.conf.

Cfengine commands: processes, files, shellcommands, groups, editfiles, copy and so forth.

The cfservd is the master dæmon, configured with /etc/cfservd.conf, and it listens on port 5803 for connections to the cfengine server. This dæmon controls security and directory access for all client machines connecting to it. cfagent is the client program for running cfengine on hosts. It will run either from cron, manually or from the execution dæmon for cfengine, cfexecd. A common method for running the cfagent is to execute it from cron using the cfexecd in non-dæmon mode. The primary reason for using both is to engage cfengine's logging system. This is accomplished using the following:

*/10 * * * * /var/cfengine/sbin/cfexecd -F

as a cron entry on Linux (unless Solaris starts to understand */10). Note that this is fairly frequent and good only for a low number of servers. We don't want 800 servers updating within the same ten minutes.

Cfengine for Enterprise Configuration Management
Cfengine makes it easier to manage configuration files across large numbers of machines. SCOTT LACKEY

Figure 1. How the Few Control the Many

The cfenvd is the "environment dæmon" that runs on the

client side of the cfengine implementation. It gathers information about the host machine, such as hostname, OS and IP address. The cfenvd detects these factors about a host and uses them to determine to which groups the machine belongs. This, in effect, creates a profile for each machine that cfengine uses to determine what work to perform on each host.

The master configuration file for each host is cfagent.conf. This file can contain all the configuration information and cfengine code for the host, a subset of hosts or all hosts in the cfengine network. This file is often just a starting point, where all configurations are stored in other files and "imported" into cfagent.conf, in a very similar fashion to Nagios configuration files. The update.conf file is the fundamental configuration file for the client. It primarily just identifies the cfengine server and gets a copy of the cfagent.conf.

Figure 2. Automated Distribution of Cfengine Files

The update.conf file tells the cfengine server to deploy a new cfagent.conf file (and perhaps other files as well) if the current copy on the host machine is different. This adds some protection for a scenario where a corrupt cfagent.conf is sent out or in case there never was one. Although you could use cfengine to distribute update.conf, it should be copied manually to each host.

Cfengine "commands" are not entered on the command line. They make up the syntax of the cfengine configuration language. Because cfengine is a framework, the system administrator must write the necessary commands in cfengine configuration files in order to move and manipulate data. As an example, let's take a look at the files command as it would appear in the cfagent.conf file:

files:

  /etc/passwd mode=644
    owner=root action=fixall

  /etc/shadow mode=600
    owner=root action=fixall

This would set all machines' /etc/passwd and /etc/shadow files to the permissions listed in the file (644 and 600). It also

would change the owner of the file to root and fix all of these settings if they are found to be different, each time cfengine runs. It's important to keep in mind that there are no group limitations to this particular files command. If cfengine does not have a group listed for the command, it assumes you mean any host. This also could be written as:

files:

  any::

    /etc/passwd mode=644
      owner=root action=fixall

    /etc/shadow mode=600
      owner=root action=fixall

This brings us to an important topic in building a cfengine implementation: groups. There is a groups command that can be used to assign hosts to groups based on various criteria. Custom groups that are created in this way are called soft groups. The groups that are filled by the cfenvd dæmon automatically are referred to as hard groups. To use the groups feature of cfengine and assign some soft groups, simply create a groups.cf file, and tell the cfagent.conf to import it somewhere in the beginning of the file:

import:

  any::

    groups.cf

Cfengine will look in the default directory for the groups.cf file in /var/cfengine/inputs. Now you can create arbitrary groups based on any criteria. It is important to remember that the terms groups and classes are completely interchangeable in cfengine:

groups:

  development = ( nfs01 nfs02 10.0.0.17 )

  production = ( app01 app02 development )

You also can combine hard groups that have been discovered by cfenvd with soft groups:

groups:

  legacy = ( irix compiled_on_cygwin sco )

Let's get our testing setup in order. First, install cfengine on a server and a client or workstation. Cfengine has been compiled on almost everything, so there should be a package for your OS/distribution. Because the source is usually the latest version, and many versions are bug fixes, I recommend compiling it yourself. Installing cfengine gives you both the server and client binaries



and utilities on every machine, so be careful not to run the server dæmon (cfservd) on a client machine unless you specifically intend to do that. After the install, you should have a /var/cfengine directory and the binaries mentioned previously.

Before any host can actually communicate with the cfengine server, keys must be exchanged between the two. Cfengine keys are similar to SSH keys, except they are one-way. That is to say, both the server and the client must have each other's public key in order to communicate. Years of sysadmin paranoia cause me to recommend manually copying all keys and trusting nothing. Copy /var/cfengine/ppkeys/localhost.pub from the server to all the clients, and from the clients to the server in the same directory, renaming them /var/cfengine/ppkeys/root-<IP>.pub, where <IP> is the address of the host the key came from.
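The renaming convention can be sketched as a tiny helper (hypothetical, not part of cfengine; handy if you script the key copies):

```shell
# Build the destination filename for a copied cfengine public key,
# following the root-<IP>.pub convention described above.
ppkey_name() {
    printf '/var/cfengine/ppkeys/root-%s.pub\n' "$1"
}

ppkey_name 10.1.10.1
```

So a key copied from 10.1.10.1 would be installed as /var/cfengine/ppkeys/root-10.1.10.1.pub on the receiving machine.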

On the server side, cfservd.conf must be configured to allow clients to access particular directories. To do this, create an AllowConnectionsFrom and an admit section:

# cfservd.conf

control:

  AllowConnectionsFrom = ( 192.168.0.0/24 )

admit:

  /configs/datacenter1   *.example1.com

  /configs/datacenter2   *.example2.com

To test your example client to see whether it is connecting to the cfengine server, make sure port 5803 is clear between them, and run the server with:

cfservd -v -d2

And, on the client, run:

cfagent -v --no-splay

This will give you a lot of debugging information on the server side to see what's working and what isn't.

Now, let's take a look at distributing a configuration file. Although cfengine has a full-featured file editor in the editfiles command, using this method for distributing configurations is not advised. The copy command will move a file from the server to the client machine with .cfnew appended to the filename. Then, once the file has been copied completely, it renames the file and saves the old copy as .cfsaved in the specified directory. Here's the copy command syntax:

copy:

  class::

    <<master-file>>

      dest=target-file
      server=server
      mode=mode
      owner=owner
      group=group
      backup=true/false
      repository=backup dir
      recurse=number/inf/0
      define=classlist

Only the dest= is required, along with the filename to save at the destination. These can be different. Here's another example:

copy:

  linux::

    $(copydir)/linux/resolv.conf

      dest=/etc/resolv.conf
      server=cfengine.example1.com
      mode=644
      owner=root
      group=root
      backup=true
      repository=/var/cfengine/cfbackup
      recurse=0
      define=copiedresolvdotconf

The last line in this copy statement assigns this host to a group called copiedresolvdotconf. Although we don't have to do anything after copying this particular file, we may want to do some action on all hosts that just had this file successfully sent to them, such as sending an e-mail or restarting a process. As another example, if you update a configuration file that is attached to a dæmon, you may want to send a SIGHUP to the process to cause it to reread the configuration file. This is common with Apache's httpd.conf or inetd.conf. If the copy is not successful, this server won't be added to the copiedresolvdotconf class. You can query all servers in the network to see whether they are members and, if not, find out what went wrong.
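That follow-up action is just another command keyed on the class. As a sketch (the copiedhttpddotconf class is hypothetical; it would come from a define= on a copy of httpd.conf):

```
shellcommands:

  # runs only on hosts where the copy succeeded and defined the class
  linux.copiedhttpddotconf::

    "/usr/bin/killall -HUP httpd"
```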

A great way to version control your config files is to use a cfengine variable for the filename being copied to control which version gets distributed. Such a line may look something like this:

copy:

  linux::

    $(copydir)/linux/$(resolv_conf)

Or, better yet, you can use cfengine's class-specific variables, whose scope is limited to the class with which they are associated. This makes copy statements much more elegant and can simplify changes as your cfengine files scale:

control:

  # $(resolve_conf) value depends on context




  # is this a linux machine or hpux?

  linux:: resolve_conf = ( $(copydir)/linux/resolv.conf )

  hpux:: resolve_conf = ( $(copydir)/hpux/resolv.conf )

copy:

  linux::

    $(resolve_conf)

Here is a full cfagent.conf file that makes use of everything I've covered thus far. It also adds some practical examples of how to do sysadmin work with cfengine:

# cfagent.conf

control:

  actionsequence = ( files editfiles processes )
  AddInstallable = ( cron_restart )
  solaris:: crontab = ( /var/spool/cron/crontabs/root )
  linux:: crontab = ( /var/spool/cron/root )

files:

  solaris::
    $(crontab)
      action=touch

  linux::
    $(crontab)
      action=touch

editfiles:

  solaris::
    { $(crontab)
      AppendIfNoSuchLine "0,10,20,30,40,50 * * * * /var/cfengine/sbin/cfexecd -F"
      DefineClasses "cron_restart"
    }

  linux::
    { $(crontab)
      AppendIfNoSuchLine "0,10,20,30,40,50 * * * * /var/cfengine/sbin/cfexecd -F"
    }
    # linux doesn't need a cron restart

shellcommands:

  solaris.cron_restart::
    "/etc/init.d/cron stop"
    "/etc/init.d/cron start"

import:

  any::
    groups.cf
    copy.cf

The above is a full cfagent configuration that adds cfengine execution from cron to each client (if it's Linux or Solaris). So, effectively, once you run cfengine manually for the first time with this cfagent.conf file, cfengine will continue to run every ten minutes from that host, but you won't need to edit or restart cron. The control section of the cfagent.conf is where you can define some variables that will control how

cfengine handles the configuration file. actionsequence tells cfengine what order to execute each command, and AddInstallable is a variable that holds soft groups that get defined later in the file in a "define" statement, such as after the editfiles command, where the line is DefineClasses "cron_restart". The reason for using AddInstallable is sometimes cfengine skips over groups that are defined after command execution, and defining that group in the control section ensures that the command will be recognized throughout the configuration.

Being able to check configuration files out from a versioning system and distribute them to a set of servers is a powerful system administration tool. A number of independent tools will do a subset of cfengine's work (such as rsync, ssh and make), but nothing else allows a small group of system administrators to manage such a large group of servers. Centralizing configuration management has the dual benefit of information and control, and cfengine provides these benefits in a free, open-source tool for your infrastructure and application environments.

Scott Lackey is an independent technology consultant who has developed and deployed configuration management solutions across industry, from NASA to Wall Street. Contact him at slackey@violetconsulting.net, www.violetconsulting.net.




There are many different ways to control a Linux system over a network, and many reasons you might want to do so. When covering remote control in past columns, I've tended to focus on server-oriented usage scenarios and tools, which is to say, on administering servers via text-based applications, such as OpenSSH. But, what about GUI-based applications and remote desktops?

Remote desktop sessions can be very useful for technical support, surveillance and remote control of desktop applications. But it isn't always necessary to access an entire desktop; you may need to run only one or two specific graphical applications.

In this month's column, I describe how to use VNC (Virtual Network Computing) for remote desktop sessions and OpenSSH with X forwarding for remote access to specific applications. Our focus here, of course, is on using these tools securely, and I include a healthy dose of opinion as to the relative merits of each.

Remote Desktops vs. Remote Applications

So, which approach should you use: remote desktops or remote applications? If you've come to Linux from the Microsoft world, you may be tempted to assume that because Terminal Services in Windows is so useful, you have to have some sort of remote desktop access in Linux, too. But that may not be the case.

Linux and most other UNIX-based operating systems use the X Window System as the basis for their various graphical environments. And the X Window System was designed to be run over networks. In fact, it treats your local system as a self-contained network, over which different parts of the X Window System communicate.

Accordingly, it's not only possible but easy to run individual X Window System applications over TCP/IP networks; that is, to display the output (window) of a remotely executed graphical application on your local system. Because the X Window System's use of networks isn't terribly secure (the X Window System has no native support whatsoever for any kind of encryption), nowadays we usually tunnel X Window System application windows over the Secure Shell (SSH), especially OpenSSH.

The advantage of tunneling individual application windows is that it's faster and generally more secure than running the entire desktop remotely. The disadvantages are that OpenSSH has a history of security vulnerabilities, and for many Linux newcomers, forwarding graphical applications via commands entered in a shell session is counterintuitive. And besides, as I

mentioned earlier, remote desktop control (or even just viewing) can be very useful for technical support and for security surveillance.

Using OpenSSH with X Window System Forwarding

Having said all that, tunneling X Window System applications over OpenSSH may be a lot easier than you imagine. All you need is a client system running an X server (for example, a Linux desktop system or a Windows system running the Cygwin X server) and a destination system running the OpenSSH dæmon (sshd).

Note that I didn't say "a destination system running sshd and an X server". This is because X servers, oddly enough, don't actually run or control X Window System applications; they merely display their output. Therefore, if you're running an X Window System application on a remote system, you need to run an X server on your local system, not on the remote system. The application will execute on the remote system and send its output to your local X server's display.

Suppose you've got two systems, mylaptop and remotebox, and you want to monitor system resources on remotebox with the GNOME System Monitor. Suppose further you're running the X Window System on mylaptop and sshd on remotebox.

First, from a terminal window or xterm on mylaptop, you'd open an SSH session like this:

mick@mylaptop:~$ ssh -X admin-slave@remotebox

admin-slave@remotebox's password:

Last login: Wed Jun 11 21:50:19 2008 from
dtcla00b674986d

admin-slave@remotebox:~$

Note the -X flag in my ssh command. This enables X Window System forwarding for the SSH session. In order for that to work, sshd on the remote system must be configured with X11Forwarding set to yes in its /etc/ssh/sshd_config file. On many distributions, yes is the default setting, but check yours to be sure.
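If you need to enable it yourself, the directive lives in sshd's main configuration file and looks like the fragment below (restart sshd after changing it):

```
# /etc/ssh/sshd_config
X11Forwarding yes
```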

Next, to run the GNOME System Monitor on remotebox, such that its output (window) is displayed on mylaptop, simply execute it from within the same SSH session:

admin-slave@remotebox:~$ gnome-system-monitor &

The trailing ampersand (&) causes this command to run in the background, so you can initiate other commands from the same shell session. Without it, the cursor won't reappear in your shell window until you kill the command you just started.

Secured Remote Desktop/Application Sessions
Run graphical applications from afar, securely. MICK BAUER

SYSTEM ADMINISTRATION

At this point, the GNOME System Monitor window should appear on mylaptop's screen, displaying system performance information for remotebox. And that really is all there is to it.

This technique works for practically any X Window System application installed on the remote system. The only catch is that you need to know the name of anything you want to run in this way; that is, the actual name of the executable file.

If you're accustomed to starting your X Window System applications from a menu on your desktop, you may not know the names of their corresponding executables. One quick way to find out is to open your desktop manager's menu editor and then view the properties screen for the application in question.

For example, on a GNOME desktop, you would right-click on the Applications menu button, select Edit Menus, scroll down to System/Administration, right-click on System Monitor and select Properties. This pops up a window whose Command field shows the name gnome-system-monitor.

Figure 1 shows the Launcher Properties, not for the GNOME System Monitor, but instead for the GNOME File Browser, which is a better example, because its launcher entry includes some command-line options. Obviously, all such options also can be used when starting X applications over SSH.

Figure 1. Launcher Properties for the GNOME File Browser (Nautilus)

If this sounds like too much trouble, or if you're worried about accidentally messing up your desktop menus, you simply can run the application in question, issue the command ps auxw in a terminal window, and find the entry that corresponds to your application. The last field in each row of the output from ps is the executable's name plus the command-line options with which it was invoked.
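In practice, you would pipe that listing through grep for a fragment of the application's name; nautilus here is only an assumed example, so substitute whatever app you're after:

```shell
# Show processes whose command line mentions the (hypothetical) app name.
# The || echo keeps the pipeline from failing when nothing matches.
ps auxw | grep -i 'nautilus' || echo "no match - try part of another name"
```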

Once you've finished running your remote X Window System application, you can close it the usual way (selecting Exit from its File menu, clicking the x button in the upper right-hand corner of its window and so forth). Don't forget to close your SSH session too, by issuing the command exit in the terminal window where you're running SSH.

Virtual Network Computing (VNC)

Now that I've shown you the preferred way to run remote X Window System applications, let's discuss how to control an entire remote desktop. In the Linux/UNIX world, the most popular tool for this is Virtual Network Computing, or VNC.

Originally a project of the Olivetti Research Laboratory (which was subsequently acquired by Oracle and then AT&T before being shut down), VNC uses a protocol called Remote Frame Buffer (RFB). The original creators of VNC now maintain the application suite RealVNC, which is available in free and commercial versions, but TightVNC, UltraVNC and GNOME's vino VNC server and vinagre VNC client also are popular.

VNC's strengths are its simplicity, ubiquity and portability; it runs on many different operating systems. Because it runs over a single TCP port (usually TCP 5900), it's also firewall-friendly and easy to tunnel.

Its security, however, is somewhat problematic. VNC authentication uses a DES-based transaction that, if eavesdropped on, is vulnerable to optimized brute-force (password-guessing) attacks. This vulnerability is exacerbated by the fact that many versions of VNC support only eight-character passwords.

Furthermore, VNC session data usually is transmitted unencrypted. Only a couple flavors of VNC support TLS encryption of RFB streams, and it isn't clear how securely TLS has been implemented even in those. Thus, an attacker using a trivially hacked VNC client may be able to reconstruct and view eavesdropped VNC streams.

www.linuxjournal.com | 11

But Don't Real Sysadmins Stick to Terminal Sessions?

If you've read my book or my past columns, you've endured my repeated exhortations to keep the X Window System off of Internet-facing servers, or any other system on which it isn't needed, due to X's complexity and history of security vulnerabilities. So why am I devoting an entire column to graphical remote system administration?

Don't worry, I haven't gone soft-hearted (though possibly slightly soft-headed). I stand by that advice. But there are plenty of contexts in which you may need to administer or view things remotely besides hardened servers in Internet-facing DMZ networks.

And not all people who need to run remote applications in those non-Internet-DMZ scenarios are advanced Linux users. Should they be forbidden from doing what they need to do until they've mastered using the vi editor and writing bash scripts? Especially given that it is possible to mitigate some of the risks associated with the X Window System and VNC?

Of course they shouldn't. Although I do encourage all Linux newcomers to embrace the command line, the day may come when Linux is a truly graphically oriented operating system like Mac OS; but for now, pretty much the entire OS is driven by configuration files in /etc (and in users' home directories), and that's unlikely to change soon.

Luckily, as it operates over a single TCP port, VNC is easy to tunnel through SSH, through Virtual Private Network (VPN) sessions and through TLS/SSL wrappers such as stunnel. Let's look at a simple VNC-over-SSH example.

Tunneling VNC over SSH

To tunnel VNC over SSH, your remote system must be running an SSH daemon (sshd) and a VNC server application. Your local system must have an SSH client (ssh) and a VNC client application.

Our example remote system, remotebox, already is running sshd. Suppose it also has vino, which is also known as the GNOME Remote Desktop Preferences applet (on Ubuntu systems, it's located in the System menu's Preferences section). First, presumably from remotebox's local console, you need to open that applet and enable remote desktop sharing. Figure 2 shows vino's General settings.

If you want to view only this system's remote desktop, without controlling it, uncheck Allow other users to control your desktop. If you don't want to have to confirm remote connections explicitly (for example, because you want to connect to this system when it's unattended), you can uncheck the Ask you for confirmation box. Any time you leave yourself logged in to an unattended system, be sure to use a password-protected screensaver.

vino is limited in this way: because vino is loaded only after you log in, you can use it only to connect to a fully logged-in GNOME session in progress, not, for example, to a gdm (GNOME login) prompt. Unlike vino, other versions of VNC can be invoked as needed from xinetd or inetd. That technique is outside the scope of this article, but see Resources for a link to a how-to for doing so in Ubuntu, or simply Google the string "vnc xinetd".

Continuing with our vino example, don't close that Remote Desktop Preferences applet yet. Be sure to check the Require the user to enter this password box and to select a difficult-to-guess password. Remember, vino runs in an already-logged-in desktop session, so unless you set a password here, you'll run the risk of allowing completely unauthenticated sessions (depending on whether a password-protected screensaver is running).

If your remote system will be run logged in but unattended, you probably will want to uncheck Ask you for confirmation. Again, don't forget that locked screensaver.

We're not done setting up vino on remotebox yet. Figure 3 shows the Advanced Settings tab.

Several advanced settings are particularly noteworthy. First, you should check Only allow local connections, after which remote connections still will be possible, but only via a port-forwarder such as SSH or stunnel. Second, you may want to set a custom TCP port for vino to listen on, via the Use an alternative port option. In this example, I've chosen 20226. This is an instance of useful security-through-obscurity: if our other (more meaningful) security controls fail, at least we won't be running VNC on the obvious default port.

Figure 2. General Settings in GNOME Remote Desktop Preferences (vino)

Figure 3. Advanced Settings in GNOME Remote Desktop Preferences (vino)

Also, you should check the box next to Lock screen on disconnect, but you probably should not check Require encryption, as SSH will provide our session encryption, and adding redundant (and not-necessarily-known-good) encryption will only increase vino's already-significant latency. If you think there's only a moderate level of risk of eavesdropping in the environment in which you want to use vino (for example, on a small, private, local-area network inaccessible from the Internet), vino's TLS implementation may be good enough for you. In that case, you may opt to check the Require encryption box and skip the SSH tunneling.

(If any of you know more about TLS in vino than I was able to divine from the Internet, please write in. During my research for this article, I found no documentation or on-line discussions of vino's TLS design details whatsoever, beyond people commenting that the soundness of TLS in vino is unknown.)

This month, I seem to be offering you more "opt-out" opportunities in my examples than usual. Perhaps I'm becoming less dogmatic with age. Regardless, let's assume you've followed my advice to forgo vino's encryption in favor of SSH.

At this point, you're done with the server-side settings. You won't have to change those again. If you restart your GNOME session on remotebox, vino will restart as well, with the options you just set. The following steps, however, are necessary each time you want to initiate a VNC/SSH session.

On mylaptop (the system from which you want to connect to remotebox), open a terminal window, and type this command:

mick@mylaptop:~$ ssh -L 20226:remotebox:20226 admin-slave@remotebox

OpenSSH's -L option sets up a local port-forwarder. In this example, connections to mylaptop's TCP port 20226 will be forwarded to the same port on remotebox. The syntax for this option is -L [local port]:[remote IP or hostname]:[remote port]. You can use any available local TCP port you like (higher than 1024, unless you're running SSH as root), but the remote port must correspond to the alternative port you set vino to listen on (20226 in our example); or, if you didn't set an alternative port, it should be VNC's default of 5900.
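To make the three parts of the -L argument explicit, the sketch below assembles the article's example command from its pieces (remotebox and admin-slave are the hypothetical names used throughout; the command is echoed rather than executed, since there is no real remotebox here):

```shell
local_port=20226     # any free local TCP port above 1024
remote_host=remotebox
remote_port=20226    # the port vino listens on (5900 if left at the default)
echo "ssh -L ${local_port}:${remote_host}:${remote_port} admin-slave@${remote_host}"
# prints: ssh -L 20226:remotebox:20226 admin-slave@remotebox
```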

That's it for the SSH session. You'll be prompted for a password and then given a bash prompt on remotebox. But you won't need it, except to enter exit when your VNC session is finished. It's time to fire up your VNC client.

vino's companion client, vinagre (also known as the GNOME Remote Desktop Viewer), is good enough for our purposes here. On Ubuntu systems, it's located in the Applications menu, in the Internet section. In Figure 4, I've opened the Remote Desktop Viewer and clicked the Connect button. As you can see, rather than remotebox, I've entered localhost as the hostname. I've also entered port number 20226.

When I click the Connect button, vinagre connects to mylaptop's local TCP port 20226, which actually is being listened on by my local SSH process. SSH forwards this connection attempt through its encrypted connection to TCP 20226 on remotebox, which is being listened on by remotebox's vino process.

At this point, I'm prompted for remotebox's vino password (shown in Figure 2). On successful authentication, I'll have full access to my active desktop session on remotebox.

To end the session, I close the Remote Desktop Viewer's session, and then enter exit in my SSH session to remotebox; that's all there is to it.

This technique is easy to adapt to other versions of VNC servers and clients, and probably also to other versions of SSH; there are commercial alternatives to OpenSSH, including Windows versions. I mentioned that VNC client applications are available for a wide variety of platforms; on any such platform, you can use this technique, so long as you also have an SSH client that supports port forwarding.

Conclusion

Thus ends my crash course on how to control graphical applications over networks. I hope my examples of both techniques, SSH with X forwarding and VNC over SSH, help you get started with whatever particular distribution and software packages you prefer. Be safe!

Mick Bauer (darth.elmo@wiremonkeys.org) is Network Security Architect for one of the US's largest banks. He is the author of the O'Reilly book Linux Server Security, 2nd edition (formerly called Building Secure Servers With Linux), an occasional presenter at information security conferences and composer of the "Network Engineering Polka".


Resources

The Cygwin/X Project (information about Cygwin's free X server for Microsoft Windows): x.cygwin.com

Tichondrius' HOWTO for setting up VNC with resumable sessions (Ubuntu-centric, but mostly applicable to other distributions): ubuntuforums.org/showthread.php?t=122402

Wikipedia's VNC article, which may be helpful in making sense of the different flavors of VNC: en.wikipedia.org/wiki/Vnc

Figure 4. Using vinagre to Connect to an SSH-Forwarded Local Port


Does anyone remember Linuxcare? Founded in 1998, Linuxcare was a company that provided support services for Linux users in corporate environments. I remember seeing Linuxcare at the first-ever LinuxWorld conference in San Jose, and the thing I took away from that LinuxWorld was the Linuxcare Bootable Business Card (BBC). The BBC was a 50MB cut-down Linux distribution that fit on a business-card-size compact disc. I used that distribution to recover and repair quite a few machines, until the advent of Knoppix. I always loved the portability of that little CD though, and I missed it greatly until I stumbled across Damn Small Linux (DSL) one day.

After reading through the DSL Web site, I discovered that it was possible to run DSL off of a bootable USB key, and that old love for the Bootable Business Card was rekindled in a new way. It wasn't until I had a conversation with fellow sysadmin Kyle Rankin about the PXE boot environment he'd implemented that I realized it might be possible to set up a USB key to do more than merely boot a recovery environment. Before long, I had added the CentOS and Ubuntu netinstalls to my little USB key. Not long after that, I was mentioning this in my favorite IRC channel, and one of the fellows in there suggested I put the code on SourceForge and call it Billix. I'd had a couple beers by then and thought it sounded like a great idea. In that instant, Billix was born.

Billix is an aggregation of many different tools that can be useful to system administrators, all compressed down to fit within a 256MB bootable USB thumbdrive. The 256MB size is not an arbitrary number; rather, it was chosen because USB thumbdrives are very inexpensive at that size (many companies now give them away as advertising gimmicks). This allows me to have many Billix keys lying around, just waiting to be used. Because the keys are cheap or free, I don't feel bad about leaving one in a server for a day or two. If your USB drive is larger than 256MB, you still can use it for its designed purpose: storing files. Billix doesn't hamper normal use of the USB drive in any way. There also is an ISO distribution of Billix, if you want to burn a CD of it, but I feel it's not nearly as convenient as having it on a USB key.

The current Billix distribution (0.21 at the time of this writing) includes the following tools:

Damn Small Linux 4.2.5

Ubuntu 8.04 LTS netinstall

Ubuntu 7.10 netinstall

Ubuntu 6.06 LTS netinstall

Fedora 8 netinstall

CentOS 5.1 netinstall

CentOS 4.6 netinstall

Debian Etch netinstall

Debian Sarge netinstall

Memtest86 memory-checking utility

Ntpwd Windows password-changing utility

DBAN disk-wiping utility

So, with one USB key, a system administrator can recover or repair a machine, install one of eight different Linux distributions, test the memory in a system, get into a Windows machine with a lost password or wipe the disks of a machine before repurpose or disposal. In order to install any of the netinstall-based Linux distributions, a working Internet connection with DHCP is required, as the netinstall downloads the installation bits for each distribution on the fly from Internet-based mirrors.

Hopefully, you're excited to check out Billix. You simply can download the ISO version and burn it to a CD to get started, but the full utility of Billix really shines when you install it on a USB disk. Before you install it on a USB disk, you need to meet the following prerequisites:

Billix: a Sysadmin's Swiss Army Knife
Turn that spare USB stick into a sysadmin's dream. BILL CHILDERS



256MB or greater USB drive, with FAT- or FAT32-based filesystem.

Internet connection with DHCP (for netinstalls only; not required for DSL, Windows password removal or disk wiping with DBAN).

install-mbr (part of the mbr package on Ubuntu or Debian, needed for some USB drives).

syslinux (from the syslinux package on Ubuntu or Debian, required to create the bootsector on the USB drive).

Your system must be capable of booting from USB devices (most have this ability if they're made after 2005).

To install the USB-based version of Billix, first check your drive. If that drive has the U3 Windows software on it, you may want to remove it to unlock all of the drive's capacity (see the Resources section for U3 removal utilities, which are typically Windows-based). Next, if your USB drive has data on it, back up the data. I cannot stress this enough. You will be making adjustments to the partition table of the USB drive, so backing up any data that already is on the key is critical. Download the latest version of Billix from the SourceForge.net project page to your computer. Once the download is complete, untar the contents of the tarball to the root directory of your USB drive.

Now that the contents of the tarball are on your USB drive, you need to install a Master Boot Record (MBR) on the drive and set a bootsector on the drive. The Master Boot Record needs to be set up on the USB drive first. Issue an install-mbr -p1 <device> command (where <device> is your USB drive, such as /dev/sdb). Warning: make sure that you get the device of the USB drive correct, or you run the risk of messing up the MBR on your system's boot device. The -p1 option tells install-mbr to set the first partition as active (that's the one that will contain the bootsector).

Next, the bootsector needs to be installed within the first partition. Run syslinux -s <device/partition> (where <device/partition> is the device and partition of the USB drive, such as /dev/sdb1). Warning: much like installing the MBR, installing the bootsector can be a dangerous operation if you run it on the wrong device, so take care and double-check your command line before pressing the Enter key.
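Put together, the sequence looks roughly like the sketch below. The device variable, tarball name and mountpoint are assumptions, and the guard keeps the destructive commands from running until you have set and verified the device yourself:

```shell
# Billix install sketch. BILLIX_DEV is a hypothetical variable you set
# yourself (e.g. BILLIX_DEV=/dev/sdb); verify it first, because running
# install-mbr or syslinux against the wrong device can wreck that disk.
DEV="${BILLIX_DEV:-}"
if [ -z "$DEV" ]; then
    echo "Set BILLIX_DEV to your USB device before running this sketch."
else
    tar -xf billix-0.21.tar.gz -C /media/usb   # tarball/mountpoint assumed
    install-mbr -p1 "$DEV"                     # MBR, first partition active
    syslinux -s "${DEV}1"                      # bootsector on partition 1
fi
```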

Figure 1. The Billix Boot Menu

At this point, your USB drive can be unmounted safely, and you can test it out by booting from the USB drive. Once your system successfully boots from the USB drive, you should see a menu similar to the one shown in Figure 1. Simply choose the number for what you want to boot, run or install, and that distribution will spring into action. If you don't select a number, Damn Small Linux will boot automatically after 30 seconds.

Damn Small Linux is a miniature version of Knoppix (it actually has much of the automatic hardware-detection routines of Knoppix in it). As such, it makes an excellent rescue environment, or it can be used as a quick "trusted desktop" in the event you need to "borrow" a friend's computer to do something. I have used DSL in the past to commandeer a system temporarily at a cybercafé, so I could log in to work and fix a sick server. I've even used DSL to boot and mount a corrupted Windows filesystem, and I was able to save some of the data. DSL is fairly full-featured for its size, and it comes with two window managers (JWM or Fluxbox). It can be configured to save its data back to the USB disk in a persistent fashion, so you always can be sure you have your critical files with you and that they're easily accessible.

All the Linux distribution installations have one thing in common: they are all network-based installs. Although this is a good thing for Billix, as they take up very little space (around 10MB for each distro), it can be a bad thing during installation, as the installation time will vary with the speed of your Internet connection. There is one other upside to a network-based installation: in many cases, there is no need to update


Troubleshooting Billix

A few things can go wrong when converting a USB key to run Billix (or any USB-based distribution). The most common issue is for the USB drive to fail to boot the system. This can be due to several things. Older systems often split USB disk support into USB-Floppy emulation and USB-HDD emulation. For Billix to work on these systems, USB-HDD needs to be enabled. If your drive came with the U3 Windows-based software vault, this typically needs to be disabled or removed prior to installing Billix.

If you're seeing "MBR123" or something similar in the upper-left corner, but the system is hanging, you have a misconfigured MBR. Try install-mbr again, and make sure to use the -p1 switch. You will need to run syslinux again after running install-mbr. If all else fails, you probably need to wipe the USB drive and begin again. Back up the data on the USB drive, then use fdisk to build a new partition table (make sure to set it as FAT or FAT32). Use mkfs.vfat (with the -F 32 switch if it's a FAT32 filesystem) to build a new blank filesystem, untar the tarball again, and run install-mbr and syslinux on the newly defined filesystem.

the newly installed operating system after installation, because the OS bits that are downloaded are typically up to date. Note that when using the Red Hat-based installers (CentOS 4.6, CentOS 5.1 and Fedora 8), the system may appear to hang during the download of a file called minstg2.img. The system probably isn't hanging; it's just downloading that file, which is fairly large (around 40MB), so it can take a while depending on the speed of the mirror and the speed of your connection. Take care not to specify the USB disk accidentally as the install target for the distribution you are attempting to install.

The memtest86 utility has been around for quite a few years, yet it's a key tool for a sysadmin when faced with a flaky computer. It does only one thing, but it does it very well: it tests the RAM of a system very thoroughly. Simply boot off the USB drive, select memtest from the menu, and press Enter, and memtest86 will load and begin testing the RAM of the system immediately. At this point, you can remove the USB drive from the computer. It's no longer needed, as memtest86 is very small and loads completely into memory on startup.

The ntpwd Windows password "cracking" tool can be a controversial tool, but it is included in the Billix distribution because, as a system administrator, I've been asked countless times to get into Windows systems (or accounts on Windows systems) where the password has been lost or forgotten. The ntpwd utility can be a bit daunting, as the UI is text-based and nearly nonexistent, but it does a good job of mounting FAT32- or NTFS-based partitions, editing the SAM account database and saving those changes. Be sure to read all the messages that ntpwd displays, and take care to select the proper disk partition to edit. Also, take the program's advice and nullify a password rather than trying to change it from within the interface; zeroing the password works much more reliably.

DBAN (otherwise known as Darik's Boot and Nuke) is a very good "nuke it from orbit" hard disk wiper. It provides various levels of wipe, from a basic "overwrite the disk with zeros" to a full DoD-certified, multipass wipe. Like memtest86, DBAN is small and loads completely into memory, so you can boot the utility, remove the USB drive, start a wipe and move on to another system. I've used this to wipe clean disks on systems before handing them over to a recycler or before selling a system.

In closing, Billix may not make you coffee in the morning or eradicate Windows from the face of the earth, but having a USB key in your pocket that offers you the functionality to do all of those tasks quickly and easily can make the life of a system administrator (or any Linux-oriented person) much easier.

Bill Childers is an IT Manager in Silicon Valley, where he lives with his wife and two children. He enjoys Linux far too much, and probably should get more sun from time to time. In his spare time, he does work with the Gilroy Garlic Festival, but he does not smell like garlic.




Expanding Billix

It's relatively easy to expand Billix to support other Linux distributions, such as Knoppix or the Ubuntu live CDs. Copy the contents of the Billix USB tarball to a directory on your hard disk, and download the distro you want. Copy the necessary kernel and initrd to the directory where you put the contents of the USB tarball, taking care to rename any files if there are files in that directory with the same name. Copy any compressed filesystems that your new distro may use to the USB drive (for example, Knoppix has the KNOPPIX directory, and Puppy Linux uses PUP_XXX.SFS). Then, look at the boot configuration for that distro (it should be in isolinux.cfg). Take the necessary lines out of that file, and put them in the Billix syslinux.cfg file, changing filenames as necessary. Optionally, you can add a menu item to the boot.msg file. Finally, run syslinux -s <device>, and reboot your system to test out your newly expanded Billix.
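As an illustration, a transplanted entry in Billix's syslinux.cfg might look roughly like the fragment below. The label name, kernel and initrd filenames and boot options here are assumptions for a hypothetical Knoppix-style distro; copy the real values from that distro's own isolinux.cfg:

```
LABEL knoppix
KERNEL linux26
APPEND ramdisk_size=100000 init=/etc/init lang=us initrd=minirt.gz
```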

I have a 2GB USB drive that has a "Super-Billix" installation that includes Knoppix and Ubuntu 8.04. An added bonus of having the entire Ubuntu live CD in your pocket is that, thanks to the speed of USB 2.0, you can install Ubuntu in less than ten minutes, which would be really useful at an installfest. There is good information on creating Ubuntu-bootable USB drives available at the Pendrive Linux Web site.

Alternatively, a really neat thing to do (but way beyond the scope of this article) is to convert Billix into a network-boot (via Preboot Execution Environment, or PXE) environment. I've actually got a VMware virtual machine running Billix as a PXE boot server.

Resources

Billix Project Page: sourceforge.net/projects/billix

Damn Small Linux: www.damnsmalllinux.org

DBAN Project Page: dban.sourceforge.net

Knoppix: www.knoppix.net

Pendrive Linux: www.pendrivelinux.com

Syslinux: syslinux.zytor.com/index.php

Pxelinux: syslinux.zytor.com/pxe.php

U3 Removal Software: www.u3.com/uninstall


If a tree falls in the woods and no one is there to hear it, does it make a sound? This is the classic query designed to place your mind into the Zen-like state known as the silent mind. Whether or not you want to hear a tree fall, if you run a network, you probably want to hear a server when it goes down. Many organizations utilize the long-established Simple Network Management Protocol (SNMP) as a way to monitor their networks proactively and listen for things going down.

At a rudimentary level, SNMP requires only two items to work: a management server and a managed device (or devices). The management server pulls status and health information at regular intervals from the managed devices and stores the information in a table. Managed devices use local SNMP agents to notify the management server when defined behavior occurs (such as errors or "traps"), which are stored in the same table on the server. The result is an accurate, real-time reporting mechanism for outages. However, SNMP as a protocol does not stipulate how the data in these tables is to be presented and managed for the end user. That's where a promising new open-source network-monitoring software called Zenoss (pronounced Zeen-ohss) comes in.

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment. The Core version of Zenoss used in this article is freely available under the GPLv2. An Enterprise version also is available with additional features and support. In this article, we install Zenoss on a CentOS 5.1 system to observe its usefulness in a network-monitoring role. From there, we create a simulated multisystem server network using the following systems: a Fedora-based Postfix e-mail server, an Ubuntu server running Apache and a Windows server running File and Print services. To conserve space, only the CentOS installation is discussed in detail here. For the managed systems, only SNMP installation and configuration are covered.

Building the Zenoss Server

Begin by selecting your hardware. Zenoss lacks specific hardware requirements, but it relies heavily on MySQL, so you can use MySQL requirements as a rough guideline. I recommend using the fastest processor available, 1GB of memory, hard disks fast enough to provide acceptable MySQL performance and Gigabit Ethernet for the network. I ran several test configurations, and this configuration seemed adequate for a medium-size network (100+ nodes/devices). To keep configuration simple, all firewalls and SELinux instances were disabled in the test environment. If you use firewalls in your environment, open ports 161 (SNMP), 8080 (Zenoss Management Page) and 514 (if you integrate syslog with Zenoss).

Install CentOS 5.1 on the server using your own preferences. I used a bare install with no X Window System or desktop manager. Assign a static IP address and any other pertinent network information (DNS servers and so forth). After the OS install is complete, install the following packages using the yum command below:

yum install mysql mysql-server net-snmp net-snmp-utils gmp httpd

If the mysqld or the httpd service has not started after yum installs it, start it and set it to run for your configured runlevel. Next, download the latest Zenoss Core rpm from SourceForge.net (2.1.3 at the time of this writing), and install it using rpm from the command line. To start all the Zenoss-related daemons after the rpm has been installed, type the following at a command prompt:

service zenoss start
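For the earlier step of starting mysqld and httpd and pinning them to your runlevels, the commands on a sysvinit-era system like CentOS 5 would be service and chkconfig. The sketch below only prints them, since they need root and a Red Hat-style init system to actually run:

```shell
# Print the start/enable commands for the services Zenoss depends on;
# run the printed lines as root on the CentOS box itself.
for svc in mysqld httpd; do
    echo "service $svc start && chkconfig $svc on"
done
```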

Launch a Web browser from any machine, and type the IP address of the Zenoss server using port 8080 (for example, http://192.168.142.6:8080). Log in to the site using the default account admin, with a password of zenoss. This brings up the main dashboard. The dashboard is a compartmentalized view of the state of your managed devices. If you don't like the default display, you can arrange your dashboard any way you want, using the various drop-down lists on the portlets (windows). I recommend setting the Production States portlet to display Production, so we can see our test systems after they are added.

Almost everything related to managed devices in Zenoss revolves around classes. With classes, you can create an infinite number of systems, processes or service classifications to monitor. To begin adding devices, we need to set our SNMP community strings at the top-level Devices class. SNMP community strings are like passphrases used to authenticate traffic between devices. If one device wants to communicate with another, they must have matching community names/strings. In many deployments, administrators use the default community name of public (and/or private), which creates a security risk. I recommend changing these strings and making them into a short phrase. You can add numbers and characters to make the community name more complex to guess/crack, but I find phrases easier to remember.

Click on the Devices link on the navigation menu on the left, so that Devices is listed near the top of the page. Click on the zProperties tab and scroll down. Enter an SNMP

Zenoss and the Art of Network Monitoring
If a server goes down, do you want to hear it? JERAMIAH BOWLING

18 | www.linuxjournal.com

SYSTEM ADMINISTRATION

community string in the zSNMPCommunity field. For our test environment, I used the string whatsourclearanceclarence. You can use different strings with different subclasses of systems or individual systems, but by setting it at the Devices class, it will be used for any subclasses unless it is overridden. You also could list multiple strings in the zSNMPCommunities under the Devices class, which allows you to define multiple strings for the discovery process discussed later. Make sure your community string (zSNMPCommunity) is in this list.

Installing Net-SNMP on Linux Clients
Now let's set up our Linux systems so they can talk to the Zenoss server. After installing and configuring the operating systems on our other Linux servers, install the Net-SNMP package on each, using the following command on the Ubuntu server:

sudo apt-get install snmpd

And on the Fedora server use:

yum install net-snmp

Once the Net-SNMP packages are installed, edit out any other lines in the Access Control sections at the beginning of /etc/snmp/snmpd.conf, and add the following lines:

#       sec.name    source            community
com2sec local       localhost         whatsourclearanceclarence
com2sec mynetwork   192.168.142.0/24  whatsourclearanceclarence

#     group.name  sec.model  sec.name
group MyROGroup   v1         local
group MyROGroup   v1         mynetwork
group MyROGroup   v2c        local
group MyROGroup   v2c        mynetwork

#    name  incl/excl  subtree  mask
view all   included   .1       80

#      group      context  sec.model  sec.level  prefix  read  write  notif
access MyROGroup  ""       any        noauth     exact   all   none   none

Do not edit out any lines beneath the last Access Control section. Please note that the above is only a mildly restrictive configuration. Consult the snmpd.conf file or the Net-SNMP documentation if you want to tighten access. On the Ubuntu server, you also may have to change the following line in the /etc/default/snmpd file to allow SNMP to bind to anything other than the local loopback address:

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'

Installing SNMP on Windows
On the Windows server, access the Add/Remove Programs utility from the Control Panel. Click on the Add/Remove Windows Components button on the left. Scroll down the list of Components, check off Management and Monitoring Tools, and click on the Details button. Check Simple Network Management Protocol in the list, and click OK to install. Close the Add/Remove window, and go into the Services console from Administrative Tools in the Control Panel. Find the SNMP service in the list, right-click on it, and click on Properties to bring up the service properties tabs. Click on the Traps tab, and type in the community name. In the list of Trap Destinations, add the IP address of the Zenoss server. Now, click on the Security tab, check off the Send authentication trap box, enter the community name, and give it READ-ONLY rights. Click OK, and restart the service.

Return to the Zenoss management Web page. Click the Devices link to go into the subclass of /Devices/Servers/Windows, and on the zProperties tab, enter the name of a domain admin account and password in the zWinUser and zWinPassword fields. This account gives Zenoss access to the Windows Management Instrumentation (WMI) on your Windows systems. Make sure to click Save at the bottom of the page before navigating away.

Adding Devices into Zenoss
Now that our systems have SNMP, we can add them into Zenoss. Devices can be added individually or by scanning the network. Let's do both. To add our Ubuntu server into Zenoss, click on the Add Device link under the Management navigation section. Enter the IP address of the server and the community name. Under Device Class Path, set the selection to /Server/Linux. You could add a variety of other hardware, software and Zenoss information on this page before adding a system, but at a minimum, an IP address, name and community name is required (Figure 1). Click the Add Device button, and the discovery process runs. When the results are displayed, click on the link to the new device to access it.

Figure 1. Adding a Device into Zenoss

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment.


To scan the network for devices, click the Networks link under the Browse By section of the navigation menu. If your network is not in the list, add it using CIDR notation. Once added, check the box next to your network, and use the drop-down arrow to click on the Select Discover Devices option. You will see a results page similar to the one from before. When complete, click on the links at the bottom of the results page to access the new devices. Any device found will be placed in the /Discovered class. Because we should have discovered the Fedora server and the Windows server, they should be moved to the /Devices/Servers/Linux and /Devices/Servers/Windows classes, respectively. This can be done from each server's Status tab by using the main drop-down list and selecting Manager→Change Class.
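As a quick sanity check when entering CIDR notation, the number of usable host addresses follows directly from the prefix length. This is an illustrative calculation of our own, not part of Zenoss:

```shell
# Usable host addresses in a CIDR block: 2^(32 - prefix), minus the
# network and broadcast addresses. A /24 such as the article's
# 192.168.142.0/24 yields 254 usable hosts.
awk -v prefix=24 'BEGIN { print 2^(32 - prefix) - 2 }'
# prints 254
```

A /16 entered the same way would cover 65,534 hosts, so scope your scan networks carefully.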

If all has gone well so far, we have a functional SNMP monitoring system that is able to monitor heartbeat/availability (Figure 2) and performance information (Figure 3) on our systems. You can customize other various Status and Performance Monitors to meet your needs, but here we will use the default localhost monitors.

Figure 2. The Zenoss Dashboard

Figure 3. Performance data is collected almost immediately after discovery.

Creating Users and Setting E-Mail Alerts
At this point, we can use the dashboard to monitor the managed devices, but we will be notified only if we visit the site. It would be much more helpful if we could receive alerts via e-mail. To set up e-mail alerting, we need to create a separate user account, as alerts do not work under the admin account. Click on the Settings link under the Management navigation section. Using the drop-down arrow on the menu, select Add User. Enter a user name and e-mail address when prompted. Click on the new user in the list to edit its properties. Enter a password for the new account, and assign a role of Manager. Click Save at the bottom of the page. Log out of Zenoss, and log back in with the new account. Bring the settings page back up, and enter your SMTP server information. After setting up SMTP, we need to create an Alerting Rule for our new user. Click on the Users tab, and click on the account just created in the list. From the resulting page, click on the Edit tab, and enter the e-mail address to which you want alerts sent. Now, go to the Alerting Rules tab, and create a new rule using the drop-down arrow. On the edit tab of the new Alerting Rule, change the Action to email, Enabled to True, and change the Severity formula to >= Warning (Figure 4). Click Save.

Figure 4. Creating an Alert Rule

Figure 5. Zenoss alerts are sent fresh to your mailbox.



The above rule sends alerts when any Production server experiences an event rated Warning or higher (Figure 5). Using a filter, you can create any number of rules and have them apply only to specific devices or groups of devices. If you want to limit your alerts by time, to working hours for example, use the Schedule tab on the Alerting Rule to define a window. If no schedule is specified (the default), the rule runs all the time. In our rule, only one user will be notified. You also can create groups of users from the Settings page, so that multiple people are alerted, or you could use a group e-mail address in your user properties.

Services and Processes
We can expand our view of the test systems by adding a process and a service for Zenoss to monitor. When we refer to a process in Zenoss, we mean an active program, usually a daemon, running on a managed device. Zenoss uses regular expressions to monitor processes.
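To get a feel for how such a regex behaves before trusting it in Zenoss, you can try it against sample command lines with grep. The process paths below are hypothetical stand-ins for a real process table, and the pattern master is the same one used for Postfix in this article:

```shell
# Simulate matching a Zenoss process regex against command lines
# from a process table; count how many entries the regex matches.
printf '%s\n' \
    '/usr/lib/postfix/master' \
    '/usr/sbin/sshd' \
    '/usr/sbin/httpd' | grep -c 'master'
# prints 1
```

A regex that is too loose (say, post) could match unrelated daemons, so it pays to test the pattern against a realistic listing first.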

To monitor Postfix on the mail server, first, let's define it as a process. Navigate to the Processes page under the Classes section of the navigation menu. Use the drop-down arrow next to OS Processes, and click Add Process. Enter Postfix as the process ID. When you return to the previous page, click on the link to the new process. On the edit tab of the process, enter master in the Regex field. Click Save before navigating away. Go to the zProperties tab of the

process, and make sure the zMonitor field is set to True. Click Save again. Navigate back to the mail server from the dashboard, and on the OS tab, use the topmost menu's drop-down arrow to select Add→Add OSProcess. After the process has been added, we will be alerted if the Postfix process degrades or fails. While still on the OS tab of the

Figure 6. Zenoss comes with a plethora of predefined Windows services to monitor.


server, place a check mark next to the new Postfix process, and from the OS Processes drop-down menu, select Lock OSProcess. On the next set of options, select Lock from deletion. This protects the process from being overwritten if Zenoss remodels the server.

Services in Zenoss are defined by active network ports instead of running daemons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1-65535 and include common network apps/protocols, such as SMTP (Port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the main menu's drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server, and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).

Only the Beginning
There are a multitude of other features in Zenoss that space here prevents covering, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9) and Zenpacks that provide additional monitoring and performance-capturing capabilities for common applications.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind, as long as you work with networks, with Zenoss at least you will be able to sleep at night knowing you will hear things when they go down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

Resources

Zenoss: www.zenoss.com

Zenoss SourceForge Downloads Page: sourceforge.net/project/showfiles.php?group_id=163126

NET-SNMP: net-snmp.sourceforge.net

CentOS: www.centos.org

CentOS 5 Mirrors: isoredirect.centos.org/centos/5/isos/i386

Figure 7. Monitoring HTTP as an IPService
Figure 8. Zenoss automatically maps your network for you.

Figure 9. Multiple sites can be monitored geographically with the Google Maps API.


In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost-savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market: Dell and HP Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects such as the Etherboot Project and the Linux Terminal Server Project have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys world-wide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?
A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot eXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP, stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4); for example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a

Thin Clients Booting over a Wireless Bridge
How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON


Figure 1. LTSP Traffic Profile and Boot Sequence

thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.

How to Set Up a Thin-Client Network with a Wireless Bridge
One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server, with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1 serving PXE clients.
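For reference, a minimal subnet block in that file looks something like the following. The addresses and the pxelinux.0 path are illustrative defaults, and your file may differ depending on the Edubuntu release and architecture:

```
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.20 192.168.0.250;
    option subnet-mask 255.255.255.0;
    option routers 192.168.0.1;
    next-server 192.168.0.1;           # the LTSP/TFTP server
    filename "/ltsp/i386/pxelinux.0";
}
```

If you change the range or server address here, keep it consistent with the addresses you allow in /etc/hosts.allow below.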

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap: 192.168.0.0/24
rpc.mountd: 192.168.0.0/24
rpc.statd: 192.168.0.0/24
in.tftpd: 192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart
sudo invoke-rc.d nfs-common restart
sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide in getting the bridge running so that the clients can boot over the bridge.

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing, and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet


adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all daemon programs manually.

Evaluating the Performance of a Thin-Client Network
The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to the wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.
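The chamber's dB-to-distance conversion is internal to the product, but a rough sense of the mapping can be had from the free-space path-loss formula, distance = 10^((L - 20*log10(f) + 27.55)/20) meters, for attenuation L in dB and frequency f in MHz. Note this is our own approximation for intuition, not Azimuth's formula, and it ignores antenna gains and real-world multipath:

```shell
# Approximate free-space distance for a given path loss at 2.4GHz
# (2400MHz). A loss of 68dB corresponds to roughly 25 meters.
awk -v L=68 -v f=2400 \
    'BEGIN { printf "%.0f\n", 10^((L - 20*log(f)/log(10) + 27.55)/20) }'
# prints 25
```

Because loss grows with the logarithm of distance, each additional 6dB of attenuation roughly doubles the simulated separation.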

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber;



Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Conclusion
As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our tests show, as illustrated in Figure 5, that heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple of ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.
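The bandwidth gap behind these results is easy to quantify. As a back-of-the-envelope illustration of our own (using a hypothetical 50MB boot image, not a measured figure), the theoretical best-case transfer times at each link rate are:

```shell
# Seconds to move a 50MB image (400 megabits) at 11, 54 and
# 100Mbps, ignoring protocol overhead and contention.
for rate in 11 54 100; do
    awk -v mb=50 -v r="$rate" \
        'BEGIN { printf "%dMbps: %.1fs\n", r, mb*8/r }'
done
```

This prints 36.4s for 11Mbps ad hoc, 7.4s for 54Mbps 802.11g and 4.0s for 100Mbps Ethernet, and real wireless throughput is typically less than half the nominal rate, which is consistent with the boot-time gaps seen in Figure 3.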

Acknowledgement
This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland project, under Grant 03/IN3/1396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network, instead of from local media such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in

remote power and remote terminals or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup
You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP
Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), and locate the subnet section (or each host section if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;
filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus

Set up a PXE server and then add menus to boot kickstart images, rescue disks and diagnostic tools, all from the network. KYLE RANKIN

SYSTEM ADMINISTRATION

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot.

Install TFTP

After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package, or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf, if it wasn't already added for you:

tftp  dgram  udp  wait  root  /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions, if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
#       the trivial file transfer protocol.
#       The tftp protocol is often used to boot diskless
#       workstations, download configuration files to network-aware
#       printers, and to start the installation process for
#       some operating systems.
service tftp
{
        disable         = no
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /var/lib/tftpboot
        per_source      = 11
        cps             = 100 2
        flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.
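As an illustration, a rule permitting TFTP might look like the following fragment in iptables-restore syntax (the surrounding policy is an assumption, not from the article):

```
# Accept incoming TFTP requests (udp port 69)
-A INPUT -p udp --dport 69 -j ACCEPT
```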

Add Syslinux

Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.

PXE Menu Magic

You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS, with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
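As an illustration of that search order, the two file names can be derived with a couple of small shell helpers (the function names are mine, not part of syslinux):

```shell
# Hypothetical helpers showing which file names pxelinux looks for
# in pxelinux.cfg/ for a given client (not part of syslinux itself).
mac_name() {
    # "01-" prefix, lowercase hex, hyphens between each pair
    printf '01-%s\n' "$(printf '%s' "$1" | tr 'A-F:' 'a-f-')"
}
ip_hex() {
    # IP address as eight uppercase hex digits; pxelinux retries,
    # dropping one digit from the end after each failed lookup
    printf '%02X%02X%02X%02X\n' $(printf '%s' "$1" | tr '.' ' ')
}

mac_name 88:99:AA:BB:CC:DD    # 01-88-99-aa-bb-cc-dd
ip_hex 10.0.0.200             # 0A0000C8
```

So for the 10.0.0.200 host, pxelinux would try 0A0000C8, then 0A0000C, then 0A0000 and so on, before falling back to the file named default.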

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux
label linux
        kernel vmlinuz-centos-3.6
        append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?

The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what


happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools, like Knoppix and memtest86+, available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus

Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
        kernel vmlinuz-knx5.1.1
        append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix
label 2
        kernel vmlinuz-centos-4.5-64
        append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option lists a message, if there are any to display by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----|  Boot Options  |-----
|                         |
| 1. Knoppix 5.1.1        |
| 2. CentOS 4.5 64 bit    |
|                         |
---------------------------

<F1> Main | <F2> Help
Default image will boot in 30 seconds...

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk

One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk; I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the one I used above in my example:

label 1
        kernel vmlinuz-knx5.1.1
        append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also, notice that this example references the renamed vmlinuz-knx5.1.1 and miniroot-knx5.1.1.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove 2


from the append line.

The final step might be the largest for you, if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.
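For reference, a minimal /etc/exports entry matching this example might look like the line below; the export options shown are illustrative assumptions, not taken from the article:

```
# Read-only NFS export of the copied Knoppix CD contents
/mnt/knoppix/5.1.1   10.0.0.0/255.255.255.0(ro,async,no_root_squash)
```

After editing the file, re-export the shares (for instance, with exportfs -ra) and make sure the NFS services are running.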

Extra Features: Memtest86+

Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3

kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion

There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.


New LinuxJournal.com Mobile

We are all very excited to let you know that LinuxJournal.com is optimized for mobile viewing. You can enjoy all of our news, blogs and articles from anywhere you can find a data connection on your phone or mobile device. We know you find it difficult to be separated from your Linux Journal, so now you can take LinuxJournal.com everywhere. Need to read that latest shell script trick right now? You got it. Go to m.linuxjournal.com to enjoy this new experience, and be sure to let us know how it works for you. KATHERINE DRUCKMAN

Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques, present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly, and provide a short comparison of these two techniques.

IPsec and Openswan

IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825–1829). Later, in 1998, these RFCs were deprecated by RFCs 2401–2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network

Creating VPNs with IPsec and SSL/TLS

How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN


Figure 1. A Basic VPN Tunnel

Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. S/WAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a --set-mark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.
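A sketch of such a marking rule, in iptables-restore syntax; the mark value and the policy match are illustrative assumptions, not from the article:

```
*mangle
# Mark traffic that arrived through an IPsec policy so that later
# rules can match on the mark instead of an ipsecX interface
-A PREROUTING -m policy --pol ipsec --dir in -j MARK --set-mark 0x1
COMMIT
```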

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN

OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices. TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan

First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
        #
        # Simply use raw RSA keys.
        # After starting openswan, run:
        #       ipsec showhostkey --left (or --right)
        # and fill in the connection similarly
        # to the example below.
        left=192.168.0.89
        leftrsasigkey=0sAQPP...
        # The remote user.
        right=192.168.0.92
        rightrsasigkey=0sAQON...
        type=tunnel
        auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsakey

Then, copy and paste the contents of rsakey into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. The output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                             [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                        [OK]
NETKEY detected, testing for disabled ICMP send_redirects   [OK]
NETKEY detected, testing for disabled ICMP accept_redirects [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets) [OK]
Checking that pluto is running                              [OK]
Checking for 'ip' command                                   [OK]
Checking for 'iptables' command                             [OK]
Opportunistic Encryption Support                            [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets; this is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.


Example: Setting Up a VPN Tunnel with OpenVPN

First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side in a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest key; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.
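For example, a hypothetical client.conf variant for Ethernet bridging with an explicit tunnel MTU might look like this (the values shown are illustrative):

```
remote serverIpAddressOrHostName
dev tap
tun-mtu 1400
secret static.key
comp-lzo
```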

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar  3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official
port number assignment by IANA.  OpenVPN 2.0-beta16 and earlier used 5000 as
the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read from or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).

IPsec and OpenVPN: a Short Comparison

IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement

Thanks to Mr. Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era

Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages because, in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C# and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security

Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects (and from other programming languages) is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.
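As a sketch of this model (the account, database and table names here are hypothetical, not from the article), the application account is granted EXECUTE on a single procedure and no direct privileges on the underlying table:

```sql
-- Hypothetical sketch: the procedure is owned by a privileged account
-- and runs with the definer's rights (SQL SECURITY DEFINER, the default).
CREATE DEFINER = 'secure_owner'@'localhost' PROCEDURE log_order
    (IN cust_id INT, IN amount DECIMAL(10,2))
SQL SECURITY DEFINER
BEGIN
    -- business checks and logging would live here
    INSERT INTO orders (customer_id, order_amount, created_at)
    VALUES (cust_id, amount, NOW());
END;

-- The application account can call the procedure but cannot touch
-- the orders table directly; note there is no GRANT on mydb.orders.
GRANT EXECUTE ON PROCEDURE mydb.log_order TO 'app_user'@'%';
```

(In the mysql command-line client, the multi-statement procedure body would be wrapped in DELIMITER commands.)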

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

SYSTEM ADMINISTRATION

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

Data Abstraction
It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.
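Such an API might look like the following sketch (the table, columns and procedure name are hypothetical). The application calls only the procedure, never the table, so the table can later be restructured without touching application code:

```sql
-- Hypothetical data-access routine: one well-named entry point
-- per operation the application needs.
CREATE PROCEDURE get_customer_by_id (IN p_customer_id INT)
BEGIN
    SELECT customer_id, name, email
      FROM customers
     WHERE customer_id = p_customer_id;
END;

-- Application code everywhere issues only:
CALL get_customer_by_id(42);
```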

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
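A minimal sketch of such a procedure follows (the accounts table and its columns are hypothetical, and a real routine would also check withdrawal limits and signal errors rather than silently rolling back):

```sql
CREATE PROCEDURE transfer_funds
    (IN from_acct INT, IN to_acct INT, IN amount DECIMAL(10,2))
BEGIN
    DECLARE from_balance DECIMAL(10,2);

    START TRANSACTION;
    -- lock the source row while we check the balance
    SELECT balance INTO from_balance
      FROM accounts WHERE account_id = from_acct FOR UPDATE;

    IF from_balance < amount THEN
        ROLLBACK;  -- insufficient funds
    ELSE
        UPDATE accounts SET balance = balance - amount
         WHERE account_id = from_acct;
        UPDATE accounts SET balance = balance + amount
         WHERE account_id = to_acct;
        COMMIT;
    END IF;
END;
```

The application then issues a single CALL transfer_funds(...): one round trip in place of the half-dozen described above.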

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

Creating Common Routines across Multiple Applications
Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

For instance, in a banking application, transfer-of-funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed
It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages (.NET and Java) has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation
Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping
It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?
Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer, while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.
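For example, here is a hypothetical sketch (table and procedure names invented for illustration) of an "upsert" routine whose MySQL implementation uses the non-standard REPLACE statement, while the application sees only a generic procedure call that could be reimplemented on another RDBMS:

```sql
-- MySQL version: the MySQL-specific REPLACE statement stays
-- hidden inside the procedure.
CREATE PROCEDURE save_setting (IN p_name VARCHAR(64), IN p_value VARCHAR(255))
BEGIN
    REPLACE INTO settings (name, value) VALUES (p_name, p_value);
END;

-- Application code stays RDBMS-neutral:
CALL save_setting('theme', 'dark');
```

On Oracle or Postgres, the same CALL could be backed by a MERGE or INSERT ... ON CONFLICT implementation without touching the application.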

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic.

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes, and they will take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.
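A sketch of such a scheduled purge task (the table name and 90-day retention period are invented for illustration):

```sql
-- The event scheduler must be enabled on the server:
SET GLOBAL event_scheduler = ON;

-- Purge audit rows older than 90 days, once a day.
CREATE EVENT purge_old_audit_rows
    ON SCHEDULE EVERY 1 DAY
DO
    DELETE FROM audit_log
     WHERE created_at < NOW() - INTERVAL 90 DAY;
```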

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
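For instance, a minimal sketch of a stored function (the flat 7% tax rule is invented for illustration) that can then appear in ordinary SELECT statements:

```sql
CREATE FUNCTION sales_tax (subtotal DECIMAL(10,2))
    RETURNS DECIMAL(10,2)
    DETERMINISTIC
BEGIN
    -- Hypothetical flat rate; real logic could be far more involved.
    RETURN ROUND(subtotal * 0.07, 2);
END;

-- Callable from standard SQL:
SELECT order_id, amount, sales_tax(amount) FROM orders;
</sql>
```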

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006, with Steven Feuerstein).


In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.

The ha.cf File
The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

Listing 1. The /etc/ha.d/authkeys File

auth 1
1 sha1 ThisIsAVeryLongAndBoringPassword

The first thing you need to specify is the keepalive: the time between heartbeats, in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state

www l inux journa l com | 3 9

Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos heartbeat ip address is 192.168.1.11

# phobos heartbeat ip address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that, so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File
The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.
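On Red Hat-style systems, for example, the equivalent is typically done with chkconfig (a sketch; the service name httpd is only a placeholder, and your distribution's tooling may differ):

```
# Remove the service from init control on all runlevels,
# so that Heartbeat alone decides when it starts and stops.
sudo /sbin/chkconfig --del httpd
```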

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3> ...

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).


Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second. Not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting
Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly, or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion
I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation
To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of disk space. I recommend at least a 1GHz processor with 512MB or more of memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives
rm java
ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

SYSTEM ADMINISTRATION

numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the admin console using this same syntax from /opt/fedora-ds.
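As a concrete illustration, launching the console against a server whose admin port was set to 3892 might look like the following sketch; the FQDN is a hypothetical placeholder, not a hostname from this setup:

```
# fds1.example.com is a made-up hostname; 3892 is the custom
# admin-server port chosen earlier in this article.
cd /opt/fedora-ds
./startconsole -u admin http://fds1.example.com:3892
```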

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd
/opt/fedora-ds/start-admin &

Directory Structure and Management
Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.
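Scheduling that CLI backup typically is a one-line cron job calling the db2bak script that ships in each instance directory. This is a sketch; the schedule is arbitrary, and slapd-yourservername must match your actual instance directory:

```
# /etc/crontab entry (sketch): nightly directory backup at 02:30.
# "slapd-yourservername" is a placeholder for your instance.
30 2 * * * root /opt/fedora-ds/slapd-yourservername/db2bak
```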

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for


Figure 1. Install Options
Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying

permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate

the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication
Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two), configured the same way, with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve each other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already to make life easier.
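A quick way to sanity-check both requirements before proceeding is a pair of query-only commands run on each server. This is a sketch; the peer hostname and NTP pool are hypothetical stand-ins:

```
# On server one (placeholders for your environment):
host fds2.example.com        # confirm the peer's name resolves
ntpdate -q pool.ntp.org      # report clock offset without setting the clock
```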

The next step is creating a Replication Manager account to bind one server's directory to the other, and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add and then the Save button at the bottom. On




Figure 5. User Screen
Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients
With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility, available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tab. Once you click OK to these settings, Fedora updates your /etc/nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM, instead of your local passwd and shadow files, to authenticate users.
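On Fedora, the same result can be scripted with authconfig, which the graphical utility drives under the hood. This is a sketch; the server URI and base DN below are hypothetical values, not ones taken from this article:

```
# Enable LDAP lookups and LDAP authentication non-interactively.
# Server and base DN are placeholders for your environment.
authconfig --enableldap --enableldapauth \
           --ldapserver=ldap://fds1.example.com:3891 \
           --ldapbasedn="dc=example,dc=com" \
           --update
```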


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll

to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some .dll issues with

version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions
FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings, without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.




Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


the same ten minutes.
The cfenvd is the "environment dæmon" that runs on the

client side of the cfengine implementation. It gathers information about the host machine, such as hostname, OS and IP address. The cfenvd detects these factors about a host and uses them to determine to which groups the machine belongs. This, in effect, creates a profile for each machine that cfengine uses to determine what work to perform on each host.

The master configuration file for each host is cfagent.conf. This file can contain all the configuration information and cfengine code for the host, a subset of hosts or all hosts in the cfengine network. This file is often just a starting point, where all configurations are stored in other files and "imported" into cfagent.conf, in a very similar fashion to Nagios configuration files. The update.conf file is the fundamental configuration file for the client. It primarily just identifies the cfengine server and gets a copy of the cfagent.conf.

Figure 2. Automated Distribution of Cfengine Files

The update.conf file tells the cfengine server to deploy a new cfagent.conf file (and perhaps other files as well) if the current copy on the host machine is different. This adds some protection for a scenario where a corrupt cfagent.conf is sent out, or in case there never was one. Although you could use cfengine to distribute update.conf, it should be copied manually to each host.
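For illustration, a minimal update.conf might look like the following sketch; the policy-host name and master path are assumptions, not values taken from this article:

```
# update.conf (minimal sketch)
control:

  actionsequence = ( copy )
  policyhost     = ( cfengine.example1.com )   # assumed server name

copy:

  /var/cfengine/masterfiles/inputs
    dest=/var/cfengine/inputs
    server=$(policyhost)
    recurse=inf
```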

Cfengine "commands" are not entered on the command line. They make up the syntax of the cfengine configuration language. Because cfengine is a framework, the system administrator must write the necessary commands in cfengine configuration files in order to move and manipulate data. As an example, let's take a look at the files command as it would appear in the cfagent.conf file:

files:

  /etc/passwd mode=644
  owner=root action=fixall
  /etc/shadow mode=600
  owner=root action=fixall

This would set all machines' /etc/passwd and /etc/shadow files to the permissions listed in the file (644 and 600). It also

would change the owner of the file to root and fix all of these settings if they are found to be different, each time cfengine runs. It's important to keep in mind that there are no group limitations to this particular files command. If cfengine does not have a group listed for the command, it assumes you mean any host. This also could be written as:

files:

  any::

    /etc/passwd mode=644
    owner=root action=fixall
    /etc/shadow mode=600
    owner=root action=fixall

This brings us to an important topic in building a cfengine implementation: groups. There is a groups command that can be used to assign hosts to groups based on various criteria. Custom groups that are created in this way are called soft groups. The groups that are filled by the cfenvd dæmon automatically are referred to as hard groups. To use the groups feature of cfengine and assign some soft groups, simply create a groups.cf file, and tell the cfagent.conf to import it somewhere in the beginning of the file:

import:

  any::

    groups.cf

Cfengine will look for the groups.cf file in the default directory, /var/cfengine/inputs. Now you can create arbitrary groups based on any criteria. It is important to remember that the terms groups and classes are completely interchangeable in cfengine:

groups:

  development = ( nfs01 nfs02 10.0.0.17 )
  production = ( app01 app02 development )

You also can combine hard groups that have been discovered by cfenvd with soft groups:

groups:

  legacy = ( irix compiled_on_cygwin sco )

Let's get our testing setup in order. First, install cfengine on a server and a client or workstation. Cfengine has been compiled on almost everything, so there should be a package for your OS/distribution. Because the source is usually the latest version, and many versions are bug fixes, I recommend compiling it yourself. Installing cfengine gives you both the server and client binaries



and utilities on every machine, so be careful not to run the server dæmon (cfservd) on a client machine, unless you specifically intend to do that. After the install, you should have a /var/cfengine directory and the binaries mentioned previously.

Before any host can actually communicate with the cfengine server, keys must be exchanged between the two. Cfengine keys are similar to SSH keys, except they are one-way. That is to say, both the server and the client must have each other's public key in order to communicate. Years of sysadmin paranoia cause me to recommend manually copying all keys and trusting nothing. Copy /var/cfengine/ppkeys/localhost.pub from the server to all the clients, and from the clients to the server, in the same directory, renaming them /var/cfengine/ppkeys/root-10.11.0.1.pub, where the IP is 10.11.0.1.
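In practice, that manual copy is usually just scp in both directions. This is a sketch; the client hostname and the server IP used in the destination filename are hypothetical:

```
# From the server, push its public key to a client, naming the copy
# for the server's IP as the client sees it (placeholder values).
# Repeat the mirror-image copy from each client back to the server.
scp /var/cfengine/ppkeys/localhost.pub \
    root@client1:/var/cfengine/ppkeys/root-192.168.0.10.pub
```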

On the server side, cfservd.conf must be configured to allow clients to access particular directories. To do this, create an AllowConnectionsFrom and an admit section:

# cfservd.conf

control:

  AllowConnectionsFrom = ( 192.168.0.0/24 )

admit:

  /configs/datacenter1 example1.com
  /configs/datacenter2 example2.com

To test your example client to see whether it is connecting to the cfengine server, make sure port 5308 (cfengine's default) is clear between them, and run the server with:

cfservd -v -d2

And on the client, run:

cfagent -v --no-splay

This will give you a lot of debugging information on the server side to see what's working and what isn't.

Now, let's take a look at distributing a configuration file. Although cfengine has a full-featured file editor in the editfiles command, using this method for distributing configurations is not advised. The copy command will move a file from the server to the client machine with .cfnew appended to the filename. Then, once the file has been copied completely, it renames the file and saves the old copy as .cfsaved in the specified directory. Here's the copy command syntax:

copy:

  class::

    <<master-file>>

      dest=target-file
      server=server
      mode=mode
      owner=owner
      group=group
      backup=true/false
      repository=backup dir
      recurse=number/inf/0
      define=classlist

Only the dest= is required, along with the filename to save at the destination. These can be different. Here's another example:

copy:

  linux::

    $(copydir)/linux/resolv.conf

      dest=/etc/resolv.conf
      server=cfengine.example1.com
      mode=644
      owner=root
      group=root
      backup=true
      repository=/var/cfengine/cfbackup
      recurse=0
      define=copiedresolvdotconf

The last line in this copy statement assigns this host to a group called copiedresolvdotconf. Although we don't have to do anything after copying this particular file, we may want to do some action on all hosts that just had this file successfully sent to them, such as sending an e-mail or restarting a process. As another example, if you update a configuration file that is attached to a dæmon, you may want to send a SIGHUP to the process to cause it to reread the configuration file. This is common with Apache's httpd.conf or inetd.conf. If the copy is not successful, this server won't be added to the copiedresolvdotconf class. You can query all servers in the network to see whether they are members, and if not, find out what went wrong.
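For instance, a shellcommands stanza keyed on that class could take the follow-up action. This is a sketch; the logger call is an illustrative placeholder for whatever restart or notification your dæmon actually needs:

```
# Runs only on hosts where the copy above defined the class.
shellcommands:

  copiedresolvdotconf::

    "/usr/bin/logger cfengine replaced resolv.conf"
```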

A great way to version control your config files is to use a cfengine variable for the filename being copied, to control which version gets distributed. Such a line may look something like this:

copy:

  linux::

    $(copydir)/linux/$(resolv_conf)

Or better yet, you can use cfengine's class-specific variables, whose scope is limited to the class with which they are associated. This makes copy statements much more elegant and can simplify changes as your cfengine files scale:

control:

  # $(resolve_conf) value depends on context




  # is this a linux machine or hpux?
  linux:: resolve_conf = ( $(copydir)/linux/resolv.conf )
  hpux:: resolve_conf = ( $(copydir)/hpux/resolv.conf )

copy:

  linux::

    $(resolve_conf)

Here is a full cfagent.conf file that makes use of everything I've covered thus far. It also adds some practical examples of how to do sysadmin work with cfengine:

# cfagent.conf

control:

  actionsequence = ( files editfiles processes )
  AddInstallable = ( cron_restart )
  solaris:: crontab = ( /var/spool/cron/crontabs/root )
  linux:: crontab = ( /var/spool/cron/root )

files:

  solaris::

    $(crontab)
      action=touch

  linux::

    $(crontab)
      action=touch

editfiles:

  solaris::

    { $(crontab)
      AppendIfNoSuchLine "0,10,20,30,40,50 * * * * /var/cfengine/sbin/cfexecd -F"
      DefineClasses "cron_restart"
    }

  linux::

    { $(crontab)
      AppendIfNoSuchLine "0,10,20,30,40,50 * * * * /var/cfengine/sbin/cfexecd -F"
      # linux doesn't need a cron restart
    }

shellcommands:

  solaris.cron_restart::

    "/etc/init.d/cron stop"
    "/etc/init.d/cron start"

import:

  any::

    groups.cf
    copy.cf

The above is a full cfagent configuration that adds cfengine execution from cron to each client (if it's Linux or Solaris). So, effectively, once you run cfengine manually for the first time with this cfagent.conf file, cfengine will continue to run every ten minutes from that host, but you won't need to edit or restart cron. The control section of the cfagent.conf is where you can define some variables that will control how

cfengine handles the configuration file. actionsequence tells cfengine what order in which to execute each command, and AddInstallable is a variable that holds soft groups that get defined later in the file in a "define" statement, such as after the editfiles command, where the line is DefineClasses "cron_restart". The reason for using AddInstallable is that sometimes cfengine skips over groups that are defined after command execution, and defining that group in the control section ensures that the command will be recognized throughout the configuration.

Being able to check configuration files out from a versioning system and distribute them to a set of servers is a powerful system administration tool. A number of independent tools will do a subset of cfengine's work (such as rsync, ssh and make), but nothing else allows a small group of system administrators to manage such a large group of servers. Centralizing configuration management has the dual benefit of information and control, and cfengine provides these benefits in a free, open-source tool for your infrastructure and application environments.

Scott Lackey is an independent technology consultant who has developed and deployed configuration management solutions across industry, from NASA to Wall Street. Contact him at slackey@violetconsulting.net or www.violetconsulting.net.




There are many different ways to control a Linux system over a network, and many reasons you might want to do so. When covering remote control in past columns, I've tended to focus on server-oriented usage scenarios and tools, which is to say, on administering servers via text-based applications, such as OpenSSH. But what about GUI-based applications and remote desktops?

Remote desktop sessions can be very useful for technical support, surveillance and remote control of desktop applications. But it isn't always necessary to access an entire desktop; you may need to run only one or two specific graphical applications.

In this month's column, I describe how to use VNC (Virtual Network Computing) for remote desktop sessions and OpenSSH with X forwarding for remote access to specific applications. Our focus here, of course, is on using these tools securely, and I include a healthy dose of opinion as to the relative merits of each.

Remote Desktops vs. Remote Applications
So, which approach should you use: remote desktops or remote applications? If you've come to Linux from the Microsoft world, you may be tempted to assume that because Terminal Services in Windows is so useful, you have to have some sort of remote desktop access in Linux too. But that may not be the case.

Linux and most other UNIX-based operating systems use the X Window System as the basis for their various graphical environments. And, the X Window System was designed to be run over networks. In fact, it treats your local system as a self-contained network, over which different parts of the X Window System communicate.

Accordingly, it's not only possible but easy to run individual X Window System applications over TCP/IP networks; that is, to display the output (window) of a remotely executed graphical application on your local system. Because the X Window System's use of networks isn't terribly secure (the X Window System has no native support whatsoever for any kind of encryption), nowadays we usually tunnel X Window System application windows over the Secure Shell (SSH), especially OpenSSH.

The advantage of tunneling individual application windows is that it's faster and generally more secure than running the entire desktop remotely. The disadvantages are that OpenSSH has a history of security vulnerabilities, and that for many Linux newcomers, forwarding graphical applications via commands entered in a shell session is counterintuitive. And besides, as I

mentioned earlier, remote desktop control (or even just viewing) can be very useful for technical support and for security surveillance.

Using OpenSSH with X Window System Forwarding
Having said all that, tunneling X Window System applications over OpenSSH may be a lot easier than you imagine. All you need is a client system running an X server (for example, a Linux desktop system or a Windows system running the Cygwin X server) and a destination system running the OpenSSH dæmon (sshd).

Note that I didn't say "a destination system running sshd and an X server". This is because X servers, oddly enough, don't actually run or control X Window System applications; they merely display their output. Therefore, if you're running an X Window System application on a remote system, you need to run an X server on your local system, not on the remote system. The application will execute on the remote system and send its output to your local X server's display.

Suppose you've got two systems, mylaptop and remotebox, and you want to monitor system resources on remotebox with the GNOME System Monitor. Suppose further you're running the X Window System on mylaptop and sshd on remotebox.

First, from a terminal window or xterm on mylaptop, you'd open an SSH session like this:

mick@mylaptop:~$ ssh -X admin-slave@remotebox
admin-slave@remotebox's password:
Last login: Wed Jun 11 21:50:19 2008 from dtcla00b674986d
admin-slave@remotebox:~$

Note the -X flag in my ssh command. This enables X Window System forwarding for the SSH session. In order for that to work, sshd on the remote system must be configured with X11Forwarding set to yes in its /etc/ssh/sshd_config file. On many distributions, yes is the default setting, but check yours to be sure.
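The relevant server-side setting is a one-word change; this excerpt is a sketch of what the directive looks like in place:

```
# /etc/ssh/sshd_config (excerpt)
X11Forwarding yes
```

After editing, reload or restart sshd (for example, service sshd reload on Red Hat-style systems) so the change takes effect for new sessions.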

Next, to run the GNOME System Monitor on remotebox, such that its output (window) is displayed on mylaptop, simply execute it from within the same SSH session:

admin-slave@remotebox:~$ gnome-system-monitor &

The trailing ampersand (&) causes this command to run in

Secured Remote Desktop/Application Sessions
Run graphical applications from afar, securely. MICK BAUER


the background, so you can initiate other commands from the same shell session. Without this, the cursor won't reappear in your shell window until you kill the command you just started.

At this point, the GNOME System Monitor window should appear on mylaptop's screen, displaying system performance information for remotebox. And, that really is all there is to it.

This technique works for practically any X Window System application installed on the remote system. The only catch is that you need to know the name of anything you want to run in this way; that is, the actual name of the executable file.

If you're accustomed to starting your X Window System applications from a menu on your desktop, you may not know the names of their corresponding executables. One quick way to find out is to open your desktop manager's menu editor, and then view the properties screen for the application in question.

For example, on a GNOME desktop, you would right-click on the Applications menu button, select Edit Menus, scroll down to System→Administration, right-click on System Monitor and select Properties. This pops up a window whose Command field shows the name gnome-system-monitor.

Figure 1 shows the Launcher Properties, not for the GNOME System Monitor, but instead for the GNOME File Browser, which is a better example, because its launcher entry includes some command-line options. Obviously, all such options also can be used when starting X applications over SSH.

Figure 1. Launcher Properties for the GNOME File Browser (Nautilus)

If this sounds like too much trouble, or if you're worried about accidentally messing up your desktop menus, you simply can run the application in question, issue the command ps auxw in a terminal window, and find the entry that corresponds to your application. The last field in each row of the output from ps is the executable's name, plus the command-line options with which it was invoked.
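If the ps listing is long, standard text tools can narrow it down. This one-liner is a sketch using only stock utilities; it prints each process's command field once, so you can grep for a likely fragment of your application's name:

```shell
# Print the command field (11th column of "ps auxw") of every
# process, de-duplicated, one per line.
ps auxw | awk 'NR>1 {print $11}' | sort -u
```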

Once you've finished running your remote X Window System application, you can close it the usual way (selecting Exit from its File menu, clicking the x button in the upper right-hand corner of its window and so forth). Don't forget to close your SSH session too, by issuing the command exit in the terminal window where you're running SSH.

Virtual Network Computing (VNC)
Now that I've shown you the preferred way to run remote X Window System applications, let's discuss how to control an entire remote desktop. In the Linux/UNIX world, the most popular tool for this is Virtual Network Computing, or VNC.

Originally a project of the Olivetti Research Laboratory (which was subsequently acquired by Oracle and then AT&T before being shut down), VNC uses a protocol called Remote Frame Buffer (RFB). The original creators of VNC now maintain the application suite RealVNC, which is available in free and commercial versions, but TightVNC, UltraVNC and GNOME's vino VNC server and vinagre VNC client also are popular.

VNC's strengths are its simplicity, ubiquity and portability; it runs on many different operating systems. Because it runs over a single TCP port (usually TCP 5900), it's also firewall-friendly and easy to tunnel.

Its security, however, is somewhat problematic. VNC authentication uses a DES-based transaction that, if eavesdropped on, is vulnerable to optimized brute-force (password-guessing) attacks. This vulnerability is exacerbated by the fact that many versions of VNC support only eight-character passwords.

Furthermore, VNC session data usually is transmitted unencrypted. Only a couple of flavors of VNC support TLS encryption of RFB streams, and it isn't clear how securely TLS has been implemented even in those. Thus, an attacker using a trivially hacked VNC client may be able to reconstruct and view eavesdropped VNC streams.

www.linuxjournal.com | 11

But Don't Real Sysadmins Stick to Terminal Sessions?
If you've read my book or my past columns, you've endured my repeated exhortations to keep the X Window System off of Internet-facing servers, or any other system on which it isn't needed, due to X's complexity and history of security vulnerabilities. So why am I devoting an entire column to graphical remote system administration?

Don't worry, I haven't gone soft-hearted (though possibly slightly soft-headed). I stand by that advice. But there are plenty of contexts in which you may need to administer or view things remotely besides hardened servers in Internet-facing DMZ networks.

And not all people who need to run remote applications in those non-Internet-DMZ scenarios are advanced Linux users. Should they be forbidden from doing what they need to do until they've mastered using the vi editor and writing bash scripts? Especially given that it is possible to mitigate some of the risks associated with the X Window System and VNC?

Of course they shouldn't, although I do encourage all Linux newcomers to embrace the command line. The day may come when Linux is a truly graphically oriented operating system like Mac OS, but for now, pretty much the entire OS is driven by configuration files in /etc (and in users' home directories), and that's unlikely to change soon.

Luckily, as it operates over a single TCP port, VNC is easy to tunnel through SSH, through Virtual Private Network (VPN) sessions and through TLS/SSL wrappers such as stunnel. Let's look at a simple VNC-over-SSH example.

Tunneling VNC over SSH
To tunnel VNC over SSH, your remote system must be running an SSH daemon (sshd) and a VNC server application. Your local system must have an SSH client (ssh) and a VNC client application.

Our example remote system, remotebox, already is running sshd. Suppose it also has vino, which is also known as the GNOME Remote Desktop Preferences applet (on Ubuntu systems, it's located in the System menu's Preferences section). First, presumably from remotebox's local console, you need to open that applet and enable remote desktop sharing. Figure 2 shows vino's General settings.

If you want to view only this system's remote desktop without controlling it, uncheck Allow other users to control your desktop. If you don't want to have to confirm remote connections explicitly (for example, because you want to connect to this system when it's unattended), you can uncheck the Ask you for confirmation box. Any time you leave yourself logged in to an unattended system, be sure to use a password-protected screensaver.

vino is limited in this way. Because vino is loaded only after you log in, you can use it only to connect to a fully logged-in GNOME session in progress, not, for example, to a gdm (GNOME login) prompt. Unlike vino, other versions of VNC can be invoked as needed from xinetd or inetd. That technique is out of the scope of this article, but see Resources for a link to a how-to for doing so in Ubuntu, or simply Google the string "vnc xinetd".
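For the curious, a VNC-from-xinetd setup generally boils down to an entry under /etc/xinetd.d along these lines. This is an illustrative sketch only; the service name, its port registration in /etc/services, and the exact Xvnc arguments all vary by VNC flavor and distribution, so consult the how-to in Resources for working details:

```
service vncts
{
        disable     = no
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = nobody
        server      = /usr/bin/Xvnc
        server_args = -inetd -query localhost -once SecurityTypes=None
}
```

The -query localhost option is the part that addresses vino's limitation: it asks the local display manager (via XDMCP, which must be enabled) for a fresh login prompt, rather than attaching to an already-running desktop.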

Continuing with our vino example, don't close that Remote Desktop Preferences applet yet. Be sure to check the Require the user to enter this password box, and to select a difficult-to-guess password. Remember, vino runs in an already-logged-in desktop session, so unless you set a password here, you'll run the risk of allowing completely unauthenticated sessions (depending on whether a password-protected screensaver is running).

If your remote system will be run logged in but unattended, you probably will want to uncheck Ask you for confirmation. Again, don't forget that locked screensaver.

We're not done setting up vino on remotebox yet. Figure 3 shows the Advanced Settings tab.

Several advanced settings are particularly noteworthy. First, you should check Only allow local connections, after which remote connections still will be possible, but only via a port-forwarder like SSH or stunnel. Second, you may want to set a custom TCP port for vino to listen on via the Use an alternative port option. In this example, I've chosen 20226. This is an instance of useful security-through-obscurity; if our other


SYSTEM ADMINISTRATION


Figure 2. General Settings in GNOME Remote Desktop Preferences (vino)

Figure 3. Advanced Settings in GNOME Remote Desktop Preferences (vino)

(more meaningful) security controls fail, at least we won't be running VNC on the obvious default port.

Also, you should check the box next to Lock screen on disconnect, but you probably should not check Require encryption, as SSH will provide our session encryption, and adding redundant (and not-necessarily-known-good) encryption will only increase vino's already-significant latency. If you think there's only a moderate level of risk of eavesdropping in the environment in which you want to use vino (for example, on a small, private, local-area network inaccessible from the Internet), vino's TLS implementation may be good enough for you. In that case, you may opt to check the Require encryption box and skip the SSH tunneling.

(If any of you know more about TLS in vino than I was able to divine from the Internet, please write in. During my research for this article, I found no documentation or on-line discussions of vino's TLS design details whatsoever, beyond people commenting that the soundness of TLS in vino is unknown.)

This month, I seem to be offering you more "opt-out" opportunities in my examples than usual. Perhaps I'm becoming less dogmatic with age. Regardless, let's assume you've followed my advice to forego vino's encryption in favor of SSH.

At this point, you're done with the server-side settings. You won't have to change those again. If you restart your GNOME session on remotebox, vino will restart as well, with the options you just set. The following steps, however, are necessary each time you want to initiate a VNC/SSH session.

On mylaptop (the system from which you want to connect to remotebox), open a terminal window, and type this command:

mick@mylaptop:~$ ssh -L 20226:remotebox:20226 admin-slave@remotebox

OpenSSH's -L option sets up a local port-forwarder. In this example, connections to mylaptop's TCP port 20226 will be forwarded to the same port on remotebox. The syntax for this option is "-L [localport]:[remote IP or hostname]:[remoteport]". You can use any available local TCP port you like (higher than 1024, unless you're running SSH as root), but the remote port must correspond to the alternative port you set vino to listen on (20226 in our example), or if you didn't set an alternative port, it should be VNC's default of 5900.
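To make that three-part syntax concrete, here's a small sketch; fwd_spec is a hypothetical helper of my own, not part of OpenSSH, that just glues the pieces together:

```shell
# Assemble an OpenSSH -L argument from its three parts:
# localport:remotehost:remoteport
fwd_spec() {
  printf '%s:%s:%s\n' "$1" "$2" "$3"
}

fwd_spec 20226 remotebox 20226   # prints 20226:remotebox:20226
# Equivalent to: ssh -L 20226:remotebox:20226 admin-slave@remotebox

# Forwarding local port 5901 to VNC's default remote port instead:
fwd_spec 5901 remotebox 5900     # prints 5901:remotebox:5900
```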

That's it for the SSH session. You'll be prompted for a password and then given a bash prompt on remotebox. But you won't need it, except to enter exit when your VNC session is finished. It's time to fire up your VNC client.

vino's companion client, vinagre (also known as the GNOME Remote Desktop Viewer), is good enough for our purposes here. On Ubuntu systems, it's located in the Applications menu in the Internet section. In Figure 4, I've opened the Remote Desktop Viewer and clicked the Connect button. As you can see, rather than remotebox, I've entered localhost as the hostname. I've also entered port number 20226.

When I click the Connect button, vinagre connects to mylaptop's local TCP port 20226, which actually is being listened on by my local SSH process. SSH forwards this connection attempt through its encrypted connection to TCP 20226 on remotebox, which is being listened on by remotebox's vino process.

At this point, I'm prompted for remotebox's vino password (shown in Figure 2). On successful authentication, I'll have full access to my active desktop session on remotebox.

To end the session, I close the Remote Desktop Viewer's session, and then enter exit in my SSH session to remotebox; that's all there is to it.

This technique is easy to adapt to other versions of VNC servers and clients, and probably also to other versions of SSH; there are commercial alternatives to OpenSSH, including Windows versions. I mentioned that VNC client applications are available for a wide variety of platforms; on any such platform, you can use this technique, so long as you also have an SSH client that supports port forwarding.

Conclusion
Thus ends my crash course on how to control graphical applications over networks. I hope my examples of both techniques, SSH with X forwarding and VNC over SSH, help you get started with whatever particular distribution and software packages you prefer. Be safe!

Mick Bauer (darthelmo@wiremonkeys.org) is Network Security Architect for one of the US's largest banks. He is the author of the O'Reilly book Linux Server Security, 2nd edition (formerly called Building Secure Servers With Linux), an occasional presenter at information security conferences and composer of the "Network Engineering Polka".


Resources

The Cygwin/X (information about Cygwin's free X server for Microsoft Windows): x.cygwin.com

Tichondrius' HOWTO for setting up VNC with resumable sessions (Ubuntu-centric, but mostly applicable to other distributions): ubuntuforums.org/showthread.php?t=122402

Wikipedia's VNC article, which may be helpful in making sense of the different flavors of VNC: en.wikipedia.org/wiki/Vnc

Figure 4. Using vinagre to Connect to an SSH-Forwarded Local Port


Does anyone remember Linuxcare? Founded in 1998, Linuxcare was a company that provided support services for Linux users in corporate environments. I remember seeing Linuxcare at the first-ever LinuxWorld conference in San Jose, and the thing I took away from that LinuxWorld was the Linuxcare Bootable Business Card (BBC). The BBC was a 50MB cut-down Linux distribution that fit on a business-card-size compact disc. I used that distribution to recover and repair quite a few machines, until the advent of Knoppix. I always loved the portability of that little CD though, and I missed it greatly until I stumbled across Damn Small Linux (DSL) one day.

After reading through the DSL Web site, I discovered that it was possible to run DSL off of a bootable USB key, and that old love for the Bootable Business Card was rekindled in a new way. It wasn't until I had a conversation with fellow sysadmin Kyle Rankin about the PXE boot environment he'd implemented that I realized it might be possible to set up a USB key to do more than merely boot a recovery environment. Before long, I had added the CentOS and Ubuntu netinstalls to my little USB key. Not long after that, I was mentioning this

in my favorite IRC channel, and one of the fellows in there suggested I put the code on SourceForge and call it Billix. I'd had a couple beers by then and thought it sounded like a great idea. In that instant, Billix was born.

Billix is an aggregation of many different tools that can be useful to system administrators, all compressed down to fit within a 256MB bootable USB thumbdrive. The 256MB size is not an arbitrary number; rather, it was chosen because USB thumbdrives are very inexpensive at that size (many companies now give them away as advertising gimmicks). This allows me to have many Billix keys lying around, just waiting to be used. Because the keys are cheap or free, I don't feel bad about leaving one in a server for a day or two. If your USB drive is larger than 256MB, you still can use it for its designed purpose of storing files; Billix doesn't hamper normal use of the USB drive in any way. There also is an ISO distribution of Billix if you want to burn a CD of it, but I feel it's not nearly as convenient as having it on a USB key.

The current Billix distribution (0.21 at the time of this writing) includes the following tools:

- Damn Small Linux 4.2.5

- Ubuntu 8.04 LTS netinstall

- Ubuntu 7.10 netinstall

- Ubuntu 6.06 LTS netinstall

- Fedora 8 netinstall

- CentOS 5.1 netinstall

- CentOS 4.6 netinstall

- Debian Etch netinstall

- Debian Sarge netinstall

- Memtest86 memory-checking utility

- Ntpwd Windows password changing utility

- DBAN disk wiper utility

So, with one USB key, a system administrator can recover or repair a machine, install one of eight different Linux distributions, test the memory in a system, get into a Windows machine with a lost password, or wipe the disks of a machine before repurpose or disposal. In order to install any of the netinstall-based Linux distributions, a working Internet connection with DHCP is required, as the netinstall downloads the installation bits for each distribution on the fly from Internet-based mirrors.

Hopefully, you're excited to check out Billix. You simply can download the ISO version and burn it to a CD to get started, but the full utility of Billix really shines when you install it on a USB disk. Before you install it on a USB disk, you need to meet the following prerequisites:

Billix: a Sysadmin's Swiss Army Knife
Turn that spare USB stick into a sysadmin's dream with Billix. BILL CHILDERS



- 256MB or greater USB drive with FAT- or FAT32-based filesystem.

- Internet connection with DHCP (for netinstalls only; not required for DSL, Windows password removal or disk wiping with DBAN).

- install-mbr (part of the mbr package on Ubuntu or Debian, needed for some USB drives).

- syslinux (from the syslinux package on Ubuntu or Debian, required to create the bootsector on the USB drive).

- Your system must be capable of booting from USB devices (most have this ability if they're made after 2005).

To install the USB-based version of Billix, first check your drive. If that drive has the U3 Windows software on it, you may want to remove it to unlock all of the drive's capacity (see the Resources section for U3 removal utilities, which are typically Windows-based). Next, if your USB drive has data on it, back up the data. I cannot stress this enough. You will be making adjustments to the partition table of the USB drive, so backing up any data that already is on the key is critical. Download the latest version of Billix from the SourceForge.net project page to your computer. Once the download is complete, untar the contents of the tarball to the root directory of your USB drive.

Now that the contents of the tarball are on your USB drive, you need to install a Master Boot Record (MBR) on the drive and set a bootsector on the drive. The Master Boot Record needs to be set up on the USB drive first. Issue a install-mbr -p1 <device> (where <device> is your USB drive, such as /dev/sdb). Warning: make sure that you get the device of the USB drive correct, or you run the risk of messing up the MBR on your system's boot device. The -p1 option tells install-mbr to set the first partition as active (that's the one that will contain the bootsector).

Next, the bootsector needs to be installed within the first partition. Run syslinux -s <device/partition> (where <device/partition> is the device and partition of the USB drive, such as /dev/sdb1). Warning: much like installing the MBR, installing the bootsector can be a dangerous operation if you run it on the wrong device, so take care and double-check your command line before pressing the Enter key.
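Putting the two steps together, a cautious wrapper might look like the sketch below. billix_bootable is my own name for it, /dev/sdb is only an example device, and the DRY_RUN guard just echoes the commands so you can inspect them before running anything destructive:

```shell
# Write the MBR and bootsector for a Billix key.
# WARNING: pointing this at the wrong device will clobber its MBR.
billix_bootable() {
  drive=$1
  run() {
    # With DRY_RUN=1, print the command instead of executing it.
    if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi
  }
  run install-mbr -p1 "$drive"     # mark partition 1 active, install MBR
  run syslinux -s "${drive}1"      # install the bootsector on partition 1
}

DRY_RUN=1 billix_bootable /dev/sdb   # preview the commands first
```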

Figure 1. The Billix Boot Menu

At this point, your USB drive can be unmounted safely, and you can test it out by booting from the USB drive. Once your system successfully boots from the USB drive, you should see a menu similar to the one shown in Figure 1. Simply choose the number for what you want to boot, run or install, and that distribution will spring into action. If you don't select a number, Damn Small Linux will boot automatically after 30 seconds.
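That default-boot behavior is ordinary syslinux configuration. In Billix's syslinux.cfg, the relevant directives would look something like the following (label names here are illustrative; check the file on your own key):

```
DEFAULT dsl
PROMPT 1
# TIMEOUT is in tenths of a second, so 300 = 30 seconds
TIMEOUT 300
```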

Damn Small Linux is a miniature version of Knoppix (it actually has much of the automatic hardware-detection routines of Knoppix in it). As such, it makes an excellent rescue environment, or it can be used as a quick "trusted desktop" in the event you need to "borrow" a friend's computer to do something. I have used DSL in the past to commandeer a system temporarily at a cybercafé, so I could log in to work and fix a sick server. I've even used DSL to boot and mount a corrupted Windows filesystem, and I was able to save some of the data. DSL is fairly full-featured for its size, and it comes with two window managers (JWM or Fluxbox). It can be configured to save its data back to the USB disk in a persistent fashion, so you always can be sure you have your critical files with you and that it's easily accessible.

All the Linux distribution installations have one thing in common: they are all network-based installs. Although this is a good thing for Billix, as they take up very little space (around 10MB for each distro), it can be a bad thing during installation, as the installation time will vary with the speed of your Internet connection. There is one other upside to a network-based installation. In many cases, there is no need to update


Troubleshooting Billix

A few things can go wrong when converting a USB key to run Billix (or any USB-based distribution). The most common issue is for the USB drive to fail to boot the system. This can be due to several things. Older systems often split USB disk support into USB-Floppy emulation and USB-HDD emulation. For Billix to work on these systems, USB-HDD needs to be enabled. If your drive came with the U3 Windows-based software vault, this typically needs to be disabled or removed prior to installing Billix.

If you're seeing "MBR123" or something similar in the upper-left corner, but the system is hanging, you have a misconfigured MBR. Try install-mbr again, and make sure to use the -p1 switch. You will need to run syslinux again after running install-mbr. If all else fails, you probably need to wipe the USB drive and begin again. Back up the data on the USB drive, then use fdisk to build a new partition table (make sure to set it as FAT or FAT32). Use mkfs.vfat (with the -F 32 switch if it's a FAT32 filesystem) to build a new blank filesystem, untar the tarball again, and run install-mbr and syslinux on the newly defined filesystem.

the newly installed operating system after installation, because the OS bits that are downloaded are typically up to date. Note that when using the Red Hat-based installers (CentOS 4.6, CentOS 5.1 and Fedora 8), the system may appear to hang during the download of a file called minstg2.img. The system probably isn't hanging; it's just downloading that file, which is fairly large (around 40MB), so it can take a while depending on the speed of the mirror and the speed of your connection. Take care not to specify the USB disk accidentally as the install target for the distribution you are attempting to install.

The memtest86 utility has been around for quite a few years, yet it's a key tool for a sysadmin when faced with a flaky computer. It does only one thing, but it does it very well: it tests the RAM of a system very thoroughly. Simply boot off the USB drive, select memtest from the menu, press Enter, and memtest86 will load and begin testing the RAM of the system immediately. At this point, you can remove the USB drive from the computer. It's no longer needed, as memtest86 is very small and loads completely into memory on startup.

The ntpwd Windows password "cracking" tool can be a controversial tool, but it is included in the Billix distribution because, as a system administrator, I've been asked countless times to get into Windows systems (or accounts on Windows systems) where the password has been lost or forgotten. The ntpwd utility can be a bit daunting, as the UI is text-based and nearly nonexistent, but it does a good job of mounting FAT32- or NTFS-based partitions, editing the SAM account database and saving those changes. Be sure to read all the messages that ntpwd displays, and take care to select the proper disk partition to edit. Also, take the program's advice and nullify a password rather than trying to change it from within the interface; zeroing the password works much more reliably.

DBAN (otherwise known as Darik's Boot and Nuke) is a very good "nuke it from orbit" hard disk wiper. It provides various levels of wipe, from a basic "overwrite the disk with zeros" to a full DoD-certified, multipass wipe. Like memtest86, DBAN is small and loads completely into memory, so you can boot the utility, remove the USB drive, start a wipe and move on to another system. I've used this to wipe clean disks on systems before handing them over to a recycler or before selling a system.

In closing, Billix may not make you coffee in the morning or eradicate Windows from the face of the earth, but having a USB key in your pocket that offers you the functionality to do all of those tasks quickly and easily can make the life of a system administrator (or any Linux-oriented person) much easier.

Bill Childers is an IT Manager in Silicon Valley, where he lives with his wife and two children. He enjoys Linux far too much, and probably should get more sun from time to time. In his spare time, he does work with the Gilroy Garlic Festival, but he does not smell like garlic.




Expanding Billix
It's relatively easy to expand Billix to support other Linux distributions, such as Knoppix or the Ubuntu live CDs. Copy the contents of the Billix USB tarball to a directory on your hard disk, and download the distro you want. Copy the necessary kernel and initrd to the directory where you put the contents of the USB tarball, taking care to rename any files if there are files in that directory with the same name. Copy any compressed filesystems that your new distro may use to the USB drive (for example, Knoppix has the KNOPPIX directory, and Puppy Linux uses PUP_XXX.SFS). Then, look at the boot configuration for that distro (it should be in isolinux.cfg). Take the necessary lines out of that file, and put them in the Billix syslinux.cfg file, changing filenames as necessary. Optionally, you can add a menu item to the boot.msg file. Finally, run syslinux -s <device>, and reboot your system to test out your newly expanded Billix.
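As an example of that last editing step, a Knoppix entry pasted into Billix's syslinux.cfg might look roughly like this. The kernel, initrd and append values below are illustrative; copy the real ones from the Knoppix release's own isolinux.cfg:

```
LABEL knoppix
  KERNEL linux26
  APPEND ramdisk_size=100000 init=/etc/init lang=us initrd=minirt.gz
```

The LABEL name is what you would type at the Billix boot prompt (or add to boot.msg as a menu entry).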

I have a 2GB USB drive that has a "Super-Billix" installation that includes Knoppix and Ubuntu 8.04. An added bonus of having the entire Ubuntu live CD in your pocket is that, thanks to the speed of USB 2.0, you can install Ubuntu in less than ten minutes, which would be really useful at an installfest. There is good information on creating Ubuntu-bootable USB drives available at the Pendrive Linux Web site.

Alternatively, a really neat thing to do (but way beyond the scope of this article) is to convert Billix into a network-boot (via Preboot eXecution Environment, or PXE) environment. I've actually got a VMware virtual machine running Billix as a PXE boot server.

Resources

Billix Project Page: sourceforge.net/projects/billix

Damn Small Linux: www.damnsmalllinux.org

DBAN Project Page: dban.sourceforge.net

Knoppix: www.knoppix.net

Pendrive Linux: www.pendrivelinux.com

Syslinux: syslinux.zytor.com/index.php

Pxelinux: syslinux.zytor.com/pxe.php

U3 Removal Software: www.u3.com/uninstall


If a tree falls in the woods and no one is there to hear it, does it make a sound? This is the classic query designed to place your mind into the Zen-like state known as the silent mind. Whether or not you want to hear a tree fall, if you run a network, you probably want to hear a server when it goes down. Many organizations utilize the long-established Simple Network Management Protocol (SNMP) as a way to monitor their networks proactively and listen for things going down.

At a rudimentary level, SNMP requires only two items to work: a management server and a managed device (or devices). The management server pulls status and health information at regular intervals from the managed devices and stores the information in a table. Managed devices use local SNMP agents to notify the management server when defined behavior occurs (such as errors or "traps"), which are stored in the same table on the server. The result is an accurate, real-time reporting mechanism for outages. However, SNMP as a protocol does not stipulate how the data in these tables is to be presented and managed for the end user. That's where a promising new open-source network-monitoring software called Zenoss (pronounced Zeen-ohss) comes in.

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment. The Core version of Zenoss used in this article is freely available under the GPLv2. An Enterprise version also is available with additional features and support. In this article, we install Zenoss on a CentOS 5.1 system to observe its usefulness in a network-monitoring role. From there, we create a simulated multisystem server network using the following systems: a Fedora-based Postfix e-mail server, an Ubuntu server running Apache and a Windows server running File and Print services. To conserve space, only the CentOS installation is discussed in detail here. For the managed systems, only SNMP installation and configuration are covered.

Building the Zenoss Server
Begin by selecting your hardware. Zenoss lacks specific hardware requirements, but it relies heavily on MySQL, so you can use MySQL requirements as a rough guideline. I recommend using the fastest processor available, 1GB of memory, fast enough hard disks to provide acceptable MySQL performance and Gigabit Ethernet for the network. I ran several test configurations, and this configuration seemed adequate enough for a medium-size network (100+ nodes/devices). To keep configuration simple, all firewalls and SELinux instances were disabled in the test environment. If you use firewalls in your environment, open ports 161 (SNMP), 8080 (Zenoss Management Page) and 514 (if you integrate syslog with Zenoss).

Install CentOS 5.1 on the server using your own preferences. I used a bare install with no X Window System or desktop manager. Assign a static IP address and any other pertinent network information (DNS servers and so forth). After the OS install is complete, install the following packages using the yum command below:

yum install mysql mysql-server net-snmp net-snmp-utils gmp httpd

If the mysqld or the httpd service has not started after yum installs it, start it and set it to run for your configured runlevel. Next, download the latest Zenoss Core rpm from SourceForge.net (2.1.3 at the time of this writing), and install it using rpm from the command line. To start all the Zenoss-related daemons after the rpm has been installed, type the following at a command prompt:

service zenoss start

Launch a Web browser from any machine, and type the IP address of the Zenoss server using port 8080 (for example, http://192.168.142.6:8080). Log in to the site using the default account admin with a password of zenoss. This brings up the main dashboard. The dashboard is a compartmentalized view of the state of your managed devices. If you don't like the default display, you can arrange your dashboard any way you want using the various drop-down lists on the portlets (windows). I recommend setting the Production States portlet to display Production, so we can see our test systems after they are added.

Almost everything related to managed devices in Zenoss revolves around classes. With classes, you can create an infinite number of systems, processes or service classifications to monitor. To begin adding devices, we need to set our SNMP community strings at the top-level Devices class. SNMP community strings are like passphrases used to authenticate traffic between devices. If one device wants to communicate with another, they must have matching community names/strings. In many deployments, administrators use the default community name of public (and/or private), which creates a security risk. I recommend changing these strings and making them into a short phrase. You can add numbers and characters to make the community name more complex to guess/crack, but I find phrases easier to remember.

Click on the Devices link on the navigation menu on the left, so that Devices is listed near the top of the page. Click on the zProperties tab and scroll down. Enter an SNMP

Zenoss and the Art of Network Monitoring
If a server goes down, do you want to hear it? JERAMIAH BOWLING



community string in the zSNMPCommunity field. For our test environment, I used the string whatsourclearanceclarence. You can use different strings with different subclasses of systems or individual systems, but by setting it at the Devices class, it will be used for any subclasses unless it is overridden. You also could list multiple strings in the zSNMPCommunities under the Devices class, which allows you to define multiple strings for the discovery process discussed later. Make sure your community string (zSNMPCommunity) is in this list.

Installing Net-SNMP on Linux Clients
Now let's set up our Linux systems so they can talk to the Zenoss server. After installing and configuring the operating systems on our other Linux servers, install the Net-SNMP package on each, using the following command on the Ubuntu server:

sudo apt-get install snmpd

And on the Fedora server, use:

yum install net-snmp

Once the Net-SNMP packages are installed, edit out any other lines in the Access Control sections at the beginning of /etc/snmp/snmpd.conf, and add the following lines:

#       sec.name   source            community
com2sec local      localhost         whatsourclearanceclarence
com2sec mynetwork  192.168.142.0/24  whatsourclearanceclarence

#       groupName  securityModel  securityName
group   MyROGroup  v1             local
group   MyROGroup  v1             mynetwork
group   MyROGroup  v2c            local
group   MyROGroup  v2c            mynetwork

#       name  incl/excl  subtree  mask
view    all   included   .1       80

#       group      context  sec.model  sec.level  prefix  read  write  notif
access  MyROGroup  ""       any        noauth     exact   all   none   none

Do not edit out any lines beneath the last Access Control sections. Please note that the above is only a mildly restrictive configuration. Consult the snmpd.conf file or the Net-SNMP documentation if you want to tighten access. On the Ubuntu server, you also may have to change the following line in the /etc/default/snmpd file, to allow SNMP to bind to anything other than the local loopback address:

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'

Installing SNMP on Windows
On the Windows server, access the Add/Remove Programs utility from the Control Panel. Click on the Add/Remove Windows Components button on the left. Scroll down the list of components, check off Management and Monitoring Tools, and click on the Details button. Check Simple Network Management Protocol in the list, and click OK to install. Close the Add/Remove window, and go into the Services console from Administrative Tools in the Control Panel. Find the SNMP service in the list, right-click on it, and click on Properties to bring up the service properties tabs. Click on the Traps tab, and type in the community name. In the list of Trap Destinations, add the IP address of the Zenoss server. Now, click on the Security tab, and check off the Send authentication trap box, enter the community name, and give it READ-ONLY rights. Click OK, and restart the service.

Return to the Zenoss management Web page. Click the Devices link to go into the subclass of /Devices/Servers/Windows, and on the zProperties tab, enter the name of a domain admin account and password in the zWinUser and zWinPassword fields. This account gives Zenoss access to the Windows Management Instrumentation (WMI) on your Windows systems. Make sure to click Save at the bottom of the page before navigating away.

Adding Devices into Zenoss
Now that our systems have SNMP, we can add them into Zenoss. Devices can be added individually or by scanning the network. Let's do both. To add our Ubuntu server into Zenoss, click on the Add Device link under the Management navigation section. Enter the IP address of the server and the community name. Under Device Class Path, set the selection to /Server/Linux. You could add a variety of other hardware, software and Zenoss information on this page before adding a system, but at a minimum, an IP address, name and community name is required (Figure 1). Click the Add Device button, and the discovery process runs. When the results are displayed, click on the link to the new device to access it.

Figure 1. Adding a Device into Zenoss

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment.


To scan the network for devices, click the Networks link under the Browse By section of the navigation menu. If your network is not in the list, add it using CIDR notation. Once added, check the box next to your network, and use the drop-down arrow to click on the Select Discover Devices option. You will see a similar results page as the one from before. When complete, click on the links at the bottom of the results page to access the new devices. Any device found will be placed in the /Discovered class. Because we should have discovered the Fedora server and the Windows server, they should be moved to the /Devices/Servers/Linux and /Devices/Servers/Windows classes, respectively. This can be done from each server's Status tab by using the main drop-down list and selecting Manage→Change Class.

If all has gone well so far, we have a functional SNMP monitoring system that is able to monitor heartbeat/availability (Figure 2) and performance information (Figure 3) on our systems. You can customize other various Status and Performance Monitors to meet your needs, but here we will use the default localhost monitors.

Figure 2. The Zenoss Dashboard

Figure 3. Performance data is collected almost immediately after discovery.

Creating Users and Setting E-Mail Alerts
At this point, we can use the dashboard to monitor the managed devices, but we will be notified only if we visit the site. It would be much more helpful if we could receive alerts via e-mail. To set up e-mail alerting, we need to create a separate user account, as alerts do not work under the admin account. Click on the Settings link under the Management navigation section. Using the drop-down arrow on the menu, select Add User. Enter a user name and e-mail address when prompted. Click on the new user in the list to edit its properties. Enter a password for the new account, and assign a role of Manager. Click Save at the bottom of the page. Log out of Zenoss, and log back in with the new account. Bring the settings page back up, and enter your SMTP server information. After setting up SMTP, we need to create an Alerting Rule for our new user. Click on the Users tab, and click on the account just created in the list. From the resulting page, click on the Edit tab, and enter the e-mail address to which you want alerts sent. Now, go to the Alerting Rules tab, and create a new rule using the drop-down arrow. On the edit tab of the new Alerting Rule, change the Action to email, Enabled to True, and change the Severity formula to >= Warning (Figure 4). Click Save.

Figure 4. Creating an Alert Rule

Figure 5. Zenoss alerts are sent fresh to your mailbox.



The above rule sends alerts when any Production server experiences an event rated Warning or higher (Figure 5). Using a filter, you can create any number of rules and have them apply only to specific devices or groups of devices. If you want to limit your alerts by time, to working hours for example, use the Schedule tab on the Alerting Rule to define a window. If no schedule is specified (the default), the rule runs all the time. In our rule, only one user will be notified. You also can create groups of users from the Settings page, so that multiple people are alerted, or you could use a group e-mail address in your user properties.
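The severity cutoff in a rule like this is just an ordered comparison. Here is a minimal sketch in Python; the severity names and their ordering are our assumption based on the >= Warning formula, not code taken from Zenoss:

```python
# Severity levels ordered from least to most severe (assumed ordering).
SEVERITIES = ["Clear", "Debug", "Info", "Warning", "Error", "Critical"]

def should_alert(event_severity, threshold="Warning"):
    """Return True if the event meets or exceeds the rule's cutoff."""
    return SEVERITIES.index(event_severity) >= SEVERITIES.index(threshold)

print(should_alert("Error"))  # True
print(should_alert("Info"))   # False
```

An event rated Error or Critical would trigger the rule, while Info and Debug events would be filtered out.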

Services and Processes
We can expand our view of the test systems by adding a process and a service for Zenoss to monitor. When we refer to a process in Zenoss, we mean an active program, usually a daemon, running on a managed device. Zenoss uses regular expressions to monitor processes.
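To illustrate how such a regex-based check works, here is a rough sketch; the sample process-table lines are made up for illustration, and this is not Zenoss's implementation:

```python
import re

# Hypothetical entries from a managed device's process table.
process_table = [
    "/usr/lib/postfix/master",
    "sshd: /usr/sbin/sshd -D",
    "qmgr -l -t fifo -u",
]

# The process definition's Regex field ("master" for Postfix) is
# matched against each process-table entry.
monitor_regex = re.compile("master")
matches = [p for p in process_table if monitor_regex.search(p)]
print(matches)  # ['/usr/lib/postfix/master']
```

If no entry matches the regex, the process is considered down and an event is raised.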

To monitor Postfix on the mail server, first let's define it as a process. Navigate to the Processes page under the Classes section of the navigation menu. Use the drop-down arrow next to OS Processes, and click Add Process. Enter Postfix as the process ID. When you return to the previous page, click on the link to the new process. On the edit tab of the process, enter master in the Regex field. Click Save before navigating away. Go to the zProperties tab of the process, and make sure the zMonitor field is set to True. Click Save again. Navigate back to the mail server from the dashboard, and on the OS tab, use the topmost menu's drop-down arrow to select Add→Add OSProcess. After the process has been added, we will be alerted if the Postfix process degrades or fails. While still on the OS tab of the

Figure 6. Zenoss comes with a plethora of predefined Windows services to monitor.


server, place a check mark next to the new Postfix process, and from the OS Processes drop-down menu, select Lock OSProcess. On the next set of options, select Lock from deletion. This protects the process from being overwritten if Zenoss remodels the server.

Services in Zenoss are defined by active network ports instead of running daemons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1-65535 and include common network apps/protocols, such as SMTP (Port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the main menu's drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class Field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).
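At its core, an IPService check comes down to whether the port accepts a TCP connection. A minimal sketch of that idea, not Zenoss's actual code:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a listener we control rather than a live Web server.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # bind to an ephemeral port
srv.listen(1)
port = srv.getsockname()[1]
print(port_open("127.0.0.1", port))  # True
srv.close()
```

In the same spirit, Zenoss would flag the service as down when the connection attempt fails or times out.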

Only the Beginning
There are a multitude of other features in Zenoss that space here prevents covering, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9) and Zenpacks that provide additional monitoring and performance-capturing capabilities for common applications.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind as long as you work with networks, with Zenoss, at least you will be able to sleep at night knowing you will hear things when they go down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

Resources

Zenoss: www.zenoss.com

Zenoss SourceForge Downloads Page: sourceforge.net/project/showfiles.php?group_id=163126

NET-SNMP: net-snmp.sourceforge.net

CentOS: www.centos.org

CentOS 5 Mirrors: isoredirect.centos.org/centos/5/isos/i386

Figure 7. Monitoring HTTP as an IPService
Figure 8. Zenoss automatically maps your network for you.

Figure 9. Multiple sites can be monitored geographically with the Google Maps API.


In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market. Dell and HP/Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects, such as the Etherboot Project and the Linux Terminal Server Project, have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys world-wide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?
A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot eXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the

boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP, stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4). For example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a

Thin Clients Booting over a Wireless Bridge
How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON


Figure 1. LTSP Traffic Profile and Boot Sequence

thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.

How to Set Up a Thin-Client Network with a Wireless Bridge
One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1, serving PXE clients.

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap:    192.168.0.0/24
rpc.mountd: 192.168.0.0/24
rpc.statd:  192.168.0.0/24
in.tftpd:   192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart
sudo invoke-rc.d nfs-common restart
sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide in getting the bridge running so that the clients can boot over the bridge:

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet


adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all daemon programs manually.

Evaluating the Performance of a Thin-Client Network
The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to the wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between

the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us the repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.
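A decibels-to-distance conversion of this kind can be approximated with the free-space path-loss model. The sketch below, evaluated at 2.4GHz, is our own illustration of the idea, not Azimuth's published formula:

```python
import math

def attenuation_to_distance_m(atten_db, freq_mhz=2400):
    """Invert FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
    to estimate distance in meters from attenuation in dB."""
    d_km = 10 ** ((atten_db - 32.44 - 20 * math.log10(freq_mhz)) / 20)
    return d_km * 1000

for db in (50, 60, 70):
    print(f"{db} dB is roughly {attenuation_to_distance_m(db):.1f} m")
```

Every additional 20dB of attenuation corresponds to roughly a tenfold increase in simulated distance, which is what makes an attenuating chamber a convenient stand-in for physically moving the devices apart.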

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber;



Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Conclusion
As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.
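A back-of-envelope calculation makes the gap concrete; the 50MB boot-time payload and the realistic throughput figures below are assumptions for illustration, not measurements from our tests:

```python
# Time to move an assumed 50MB boot-time payload over each link,
# using rough realistic throughputs rather than nominal rates.
payload_mb = 50
links = {
    "100Mbps Ethernet (~90Mbps realistic)": 90,
    "802.11g (54Mbps nominal, ~22Mbps realistic)": 22,
}
for name, mbps in links.items():
    seconds = payload_mb * 8 / mbps
    print(f"{name}: {seconds:.1f}s")
```

Even under these generous assumptions, the wireless link takes roughly four times longer per client, before contention between simultaneously booting clients is taken into account.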

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our tests show, as illustrated in Figure 5, that heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple of ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.

Acknowledgement
This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03/IN3/1396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now, you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in remote power and remote terminals, or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup
You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP
Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), and locate the subnet section (or each host section if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;
filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus
Set up a PXE server, and then add menus to boot kickstart images, rescue disks and diagnostic tools all from the network. KYLE RANKIN


When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot.

Install TFTP
After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
#              the trivial file transfer protocol.
#              The tftp protocol is often used to boot diskless
#              workstations, download configuration files to network-aware
#              printers, and to start the installation process for
#              some operating systems.
service tftp
{
        disable         = no
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /var/lib/tftpboot
        per_source      = 11
        cps             = 100 2
        flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.

Add Syslinux
Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.

PXE Menu Magic

You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
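The search order above is mechanical enough to sketch in a few lines. The following Python sketch (my own illustration, not code from pxelinux itself) generates the list of file names pxelinux would try for a given MAC and IP:

```python
def pxelinux_search_order(mac, ip):
    """Return the config-file names pxelinux tries, in order.

    mac: colon-separated MAC string; ip: dotted-quad IP string.
    A sketch of the documented search order, not pxelinux's own code.
    """
    names = []
    # 1. ARP-type prefix "01-" plus the lowercase MAC, hyphen-separated.
    names.append("01-" + mac.lower().replace(":", "-"))
    # 2. The IP address in uppercase hex, dropping one digit per retry.
    hex_ip = "%02X%02X%02X%02X" % tuple(int(o) for o in ip.split("."))
    for end in range(len(hex_ip), 0, -1):
        names.append(hex_ip[:end])
    # 3. Finally, the catch-all "default" file.
    names.append("default")
    return names
```

For 88:99:AA:BB:CC:DD at 192.168.0.89, this yields 01-88-99-aa-bb-cc-dd, then C0A80059, C0A8005 and so on down to C, then default.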

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux

label linux
        kernel vmlinuz-centos-3.6
        append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?

You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software.

The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools like Knoppix and memtest86+ available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus

Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more-specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
        kernel vmlinuz-knx5.1.1
        append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix
label 2
        kernel vmlinuz-centos-4.5-64
        append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option lists a message, if there are any to display, by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------
<F1> Main | <F2> Help
Default image will boot in 30 seconds...

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk

One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk; I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the /boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the one I used above in my example:

label 1
        kernel vmlinuz-knx5.1.1
        append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx5.1.1 and miniroot-knx5.1.1.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove the 2 from the append line.

The final step might be the largest for you if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.

Extra Features: Memtest86+

Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
        kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion

There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.



Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp
atftp: ftp.mamalinux.com/pub/atftp
Syslinux PXE Page: syslinux.zytor.com/pxe.php
Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html
Knoppix: www.knoppix.org
Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.
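The encapsulation idea is simple enough to sketch. The following Python fragment (my own illustration; the byte strings are placeholders, not real protocol headers) shows the two operations a tunnel endpoint performs:

```python
def tunnel_encapsulate(inner_packet: bytes, outer_header: bytes) -> bytes:
    """Tunneling as described above: the entire inner packet (its header
    and its data) is carried in the payload field of the outer protocol."""
    return outer_header + inner_packet

def tunnel_decapsulate(outer_packet: bytes, outer_header_len: int) -> bytes:
    """The far endpoint strips the outer header to recover the original
    packet, which then continues on the inner network unchanged."""
    return outer_packet[outer_header_len:]
```

Routers along the path see only the outer header, which is why the inner packet can cross networks it could not traverse on its own.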

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan

IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825–1829). Later, in 1998, these RFCs were deprecated by RFCs 2401–2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.
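The per-connection bookkeeping just described can be modeled in a few lines. This is a toy sketch of my own (not wire-format ESP); it uses HMAC-SHA1 as the keyed checksum, which is one of the algorithms ESP commonly uses:

```python
import hashlib
import hmac
import os

class EspConnectionModel:
    """Toy model of the state described above: a unique SPI identifies
    the connection, a sequence number starts at 0 and grows by one per
    sent packet, and an ICV is computed over each packet with a secret
    key known only to the two machines."""

    def __init__(self, secret_key: bytes):
        self.spi = int.from_bytes(os.urandom(4), "big")  # connection id
        self.seq = 0
        self.key = secret_key

    def send(self, payload: bytes) -> bytes:
        header = self.spi.to_bytes(4, "big") + self.seq.to_bytes(4, "big")
        icv = hmac.new(self.key, header + payload, hashlib.sha1).digest()
        self.seq += 1  # one sequence number consumed per packet
        return header + payload + icv
```

A receiver holding the same key recomputes the HMAC and rejects any packet whose ICV does not match.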

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

Creating VPNs with IPsec and SSL/TLS
How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN

Figure 1. A Basic VPN Tunnel

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).
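The framing trick NAT-T uses can be sketched directly: the ESP packet simply becomes the payload of an ordinary UDP datagram. This is an illustration in the spirit of RFC 3948 (which uses UDP port 4500); the UDP checksum is left at zero here for simplicity:

```python
import struct

def udp_encapsulate_esp(esp_packet: bytes,
                        src_port: int = 4500,
                        dst_port: int = 4500) -> bytes:
    """Wrap an ESP packet in a UDP datagram so that NAT routers treat it
    as ordinary UDP traffic.  The ESP bytes are opaque to this function;
    this is a framing sketch, not a full RFC 3948 implementation."""
    length = 8 + len(esp_packet)  # UDP length covers header + payload
    udp_header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    return udp_header + esp_packet
```

Because the NAT device only rewrites the outer IP and UDP headers, the ESP payload, and its signature, arrive at the peer intact.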

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. S/WAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a setmark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN

OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) to connect to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan

First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
        #
        # Simply use raw RSA keys.
        # After starting Openswan, run:
        #   ipsec showhostkey --left (or --right)
        # and fill in the connection similarly
        # to the example below.
        #
        left=192.168.0.89
        leftrsasigkey=0sAQPP...
        # The remote user.
        #
        right=192.168.0.92
        rightrsasigkey=0sAQON...
        type=tunnel
        auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsakey

Then, copy and paste the contents of rsakey into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of:

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. And, output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                             [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                        [OK]
NETKEY detected, testing for disabled ICMP send_redirects   [OK]
NETKEY detected, testing for disabled ICMP accept_redirects [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets) [OK]
Checking that pluto is running                              [OK]
Checking for ip command                                     [OK]
Checking for iptables command                               [OK]
Opportunistic Encryption Support                            [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets flowing in a given direction; this is an identifier of the connection.
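That per-direction invariant is easy to check mechanically. Here is a small Python sketch (my own, for illustration) that parses tcpdump ESP lines of the form shown and extracts the SPI and sequence number:

```python
import re

# Matches lines like:
#   IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
ESP_LINE = re.compile(
    r"IP (\S+) > (\S+): ESP\(spi=(0x[0-9a-f]+),seq=(0x[0-9a-f]+)\)")

def parse_esp_lines(lines):
    """Return (src, dst, spi, seq) tuples for each ESP line, so the
    one-SPI-per-direction property can be verified from a capture."""
    records = []
    for line in lines:
        m = ESP_LINE.search(line)
        if m:
            src, dst, spi, seq = m.groups()
            records.append((src, dst, int(spi, 16), int(seq, 16)))
    return records
```

Grouping the records by source address should yield exactly one SPI per direction, with the sequence numbers increasing.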

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.



Example: Setting Up a VPN Tunnel with OpenVPN

First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side in a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file, named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar  3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).

IPsec and OpenVPN: a Short Comparison

IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement

Thanks to Mr. Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net
OpenVPN 2.0 HOWTO: openvpn.net/howto.html
RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948
Openswan: www.openswan.org
The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era

Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security

Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.
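The shape of that arrangement can be modeled outside the database. This Python sketch (an illustrative analogy of my own, not MySQL stored-procedure code) shows a single vetted entry point that both logs every attempt and rejects invalid data before it reaches the "table":

```python
class AccountGate:
    """Illustrative model of definer-rights access: callers never touch
    the table directly; the one vetted procedure logs every attempt and
    enforces the data-integrity rule."""

    def __init__(self):
        self._rows = []      # stands in for a protected database table
        self.audit_log = []  # every call is recorded, valid or not

    def insert_account(self, name: str, balance: float) -> None:
        self.audit_log.append(("insert_account", name, balance))
        if not name or balance < 0:
            raise ValueError("invalid account data rejected")
        self._rows.append({"name": name, "balance": balance})
```

In the database version, the caller's account would hold EXECUTE rights on the procedure but no direct privileges on the table, so this path is the only one available.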

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).
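The injection mechanism is worth seeing concretely. This Python sketch (an illustration of the attack pattern, not MySQL code; the table and query are invented) contrasts naive string splicing with the kind of input validation a careful stored program, or proper parameter binding, provides:

```python
def build_query_naive(user_input: str) -> str:
    """Vulnerable pattern: user input is spliced directly into SQL text."""
    return "SELECT * FROM accounts WHERE name = '%s'" % user_input

def build_query_guarded(user_input: str) -> str:
    """Crude stand-in for validation: reject input that would break out
    of the string literal.  Real code should use parameter binding."""
    if "'" in user_input or ";" in user_input:
        raise ValueError("suspicious input rejected")
    return "SELECT * FROM accounts WHERE name = '%s'" % user_input
```

Feeding the classic payload x' OR '1'='1 to the naive builder produces a WHERE clause that matches every row; the guarded builder refuses it.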

Data Abstraction

It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures, while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.
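A small sketch shows why such an API insulates callers from schema changes. The function names and table layouts below are invented for illustration; in the stored-program version, both implementations would be revisions of the same procedure:

```python
# Hypothetical data-access API: business logic calls a single function
# and never sees the storage layout, so the schema can be reworked
# without touching callers.

def get_customer_orders_v1(db: dict, customer_id: int) -> list:
    # Original layout: one flat "orders" table.
    return [o for o in db["orders"] if o["customer_id"] == customer_id]

def get_customer_orders_v2(db: dict, customer_id: int) -> list:
    # Reworked layout: orders partitioned by year; same API to callers.
    return [o for table in db["orders_by_year"].values()
            for o in table if o["customer_id"] == customer_id]
```

Because both versions present the same signature and results, the higher-level logic is untouched by the reorganization.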

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
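A minimal sketch of such a transfer procedure (the table and column names are invented for illustration; a real implementation would also check withdrawal limits, handle errors and log the transaction):

```sql
DELIMITER //
CREATE PROCEDURE transfer_funds(
    IN p_from INT, IN p_to INT, IN p_amount DECIMAL(10,2))
BEGIN
  DECLARE v_balance DECIMAL(10,2);

  START TRANSACTION;
  -- One round trip from the client; all checks happen server-side.
  SELECT balance INTO v_balance
    FROM accounts WHERE account_id = p_from FOR UPDATE;

  IF v_balance >= p_amount THEN
    UPDATE accounts SET balance = balance - p_amount
     WHERE account_id = p_from;
    UPDATE accounts SET balance = balance + p_amount
     WHERE account_id = p_to;
    COMMIT;
  ELSE
    ROLLBACK;
  END IF;
END //
DELIMITER ;
```

The application then issues a single CALL (for example, CALL transfer_funds(101, 202, 50.00)) instead of half a dozen separate statements.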

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.
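As a sketch of this pattern (the table and column names are invented, and a real metric would be more involved), a procedure can scan the rows with a cursor inside the server and send only the final number back to the client:

```sql
DELIMITER //
CREATE PROCEDURE avg_completion_days(OUT p_avg DECIMAL(10,2))
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE v_ordered, v_completed DATETIME;
  DECLARE v_total, v_rows BIGINT DEFAULT 0;
  DECLARE cur CURSOR FOR SELECT ordered_at, completed_at FROM orders;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO v_ordered, v_completed;
    IF done THEN LEAVE read_loop; END IF;
    -- Millions of rows stay inside the server;
    -- only p_avg ever crosses the network.
    SET v_total = v_total + DATEDIFF(v_completed, v_ordered);
    SET v_rows = v_rows + 1;
  END LOOP;
  CLOSE cur;
  SET p_avg = v_total / v_rows;
END //
DELIMITER ;
```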

Creating Common Routines across Multiple Applications
Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

www.linuxjournal.com | 35

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed
It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages (.NET and Java) has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation
Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping
It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?
Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.
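For instance (an illustrative sketch with invented names), the MySQL-specific REPLACE statement can be hidden behind a procedure, so a port to another RDBMS would rewrite only the procedure body while the application keeps issuing the same CALL:

```sql
-- MySQL version: uses the non-standard REPLACE statement.
-- An Oracle or DB2 port would reimplement this body (for
-- example, with MERGE) behind the identical CALL signature.
CREATE PROCEDURE save_setting(IN p_name VARCHAR(64),
                              IN p_value VARCHAR(255))
  REPLACE INTO settings (name, value) VALUES (p_name, p_value);
```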

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes and will take effect for all applications that access the database, including ad hoc SQL.
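A sketch of such a trigger (table and column names invented for illustration), keeping a denormalized order total current no matter which application inserts the row:

```sql
-- Fires for every INSERT on order_items, including ad hoc SQL,
-- so the denormalized total in orders can never drift.
CREATE TRIGGER order_item_ai AFTER INSERT ON order_items
FOR EACH ROW
  UPDATE orders
     SET total = total + NEW.quantity * NEW.unit_price
   WHERE order_id = NEW.order_id;
```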

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.
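A sketch (with invented table names) of a nightly purge using the event scheduler:

```sql
-- Requires MySQL 5.1+ with the event scheduler enabled
-- (SET GLOBAL event_scheduler = ON).
CREATE EVENT purge_audit
  ON SCHEDULE EVERY 1 DAY
  DO
    DELETE FROM audit_log
     WHERE logged_at < NOW() - INTERVAL 90 DAY;
```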

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
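A sketch (names and rates invented for illustration) of a stored function used inside ordinary SQL:

```sql
-- Centralize a business calculation in one place...
CREATE FUNCTION shipping_cost(p_weight_kg DECIMAL(8,2),
                              p_express TINYINT)
  RETURNS DECIMAL(8,2) DETERMINISTIC
  RETURN IF(p_express = 1, 12.50, 4.00) + 0.75 * p_weight_kg;

-- ...then any application, or ad hoc SQL, can use it directly.
SELECT order_id, shipping_cost(weight_kg, express) FROM orders;
```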

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006; with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.


In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.
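One way to generate a suitably long random key, shown here as a sketch (using dd and sha1sum for key generation is my suggestion, not something prescribed by the article; adjust the output path for a real install):

```shell
#!/bin/sh
# Generate a long random key and write a two-line authkeys file.
KEY=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | awk '{print $1}')
printf 'auth 1\n1 sha1 %s\n' "$KEY" > authkeys
chmod 600 authkeys
```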

The ha.cf File
The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around! Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive, the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try increasing the keepalive.

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

Listing 1. The /etc/ha.d/authkeys File

auth 1
1 sha1 ThisIsAVeryLongAndBoringPassword

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (uni-cast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2
deadtime 32
warntime 16
initdead 64
baud 19200
# On briggs, the serial device is /dev/ttyS1
# On stratton, the serial device is /dev/ttyS0
serial /dev/ttyS1
auto_failback on
node briggs
node stratton
use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1
deadtime 10
warntime 5
udpport 694
# deimos' heartbeat ip address is 192.168.1.11
# phobos' heartbeat ip address is 192.168.1.21
ucast eth1 192.168.1.11
auto_failback off
stonith_host deimos wti_nps ares.example.com erisIsTheKey
stonith_host phobos wti_nps ares.example.com erisIsTheKey
node deimos
node phobos
use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File
The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).


Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.12.1 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.
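A minimal sketch of the start/stop handling such a script needs (written as a function here so it can be exercised inline; a real resource script in /etc/init.d or /etc/ha.d/resource.d would dispatch on its first argument and actually start or stop a service):

```shell
#!/bin/sh
# Sketch of an LSB-style resource script's action handler.
resource_action() {
  case "$1" in
    start) echo "resource started" ;;  # launch the service here
    stop)  echo "resource stopped" ;;  # shut the service down here
    *)     echo "usage: {start|stop}" >&2; return 1 ;;
  esac
}

# Heartbeat starts resources left to right and stops them
# right to left by invoking the script with start/stop.
resource_action start
resource_action stop
```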

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting
Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion
I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha


In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation
To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more space. I recommend at least a 1GHz processor or above with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives
rm java
ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management
Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

www.linuxjournal.com | 43

Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.
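In LDIF terms, the OU-and-group layout described above might look like the following sketch (the suffix dc=example,dc=com, the jdoe entry and all other values are illustrative, not from this article's directory):

```
dn: ou=Accounting,dc=example,dc=com
objectClass: top
objectClass: organizationalUnit
ou: Accounting

dn: cn=Account Managers,ou=Accounting,dc=example,dc=com
objectClass: top
objectClass: groupOfUniqueNames
cn: Account Managers
uniqueMember: uid=jdoe,ou=Accounting,dc=example,dc=com
```

You could import such a file with ldapmodify -a rather than clicking through the console.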

To create a new user, right-click an OU under your domain and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication
Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora/FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already to make life easier.
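If DNS is not an option, the /etc/hosts route can be as simple as the following on both machines (the hostnames and addresses here are made up for illustration):

```
# /etc/hosts entries so each FDS server can resolve its peer
192.168.0.11   fds1.example.com   fds1
192.168.0.12   fds2.example.com   fds2
```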

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add and then the Save button at the bottom. On server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others.

Figure 5. User Screen

Figure 6. Setting Up Replica

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas rather than the entire directory database when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients
With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
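For reference, the utility's edits boil down to fragments like these (the server name and suffix are illustrative; the port matches the custom 3891 chosen during setup):

```
# /etc/ldap.conf (used by pam_ldap/nss_ldap)
host fds1.example.com
port 3891
base dc=example,dc=com

# /etc/nsswitch.conf (relevant lines)
passwd: files ldap
shadow: files ldap
group:  files ldap
```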


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some dll issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions
FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.


Figure 10. Plugin Tester Tool

Figure 11. Login Prompt

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


and utilities on every machine, so be careful not to run the server dæmon (cfservd) on a client machine unless you specifically intend to do that. After the install, you should have a /var/cfengine directory and the binaries mentioned previously.

Before any host can actually communicate with the cfengine server, keys must be exchanged between the two. Cfengine keys are similar to SSH keys, except they are one-way. That is to say, both the server and the client must have each other's public key in order to communicate. Years of sysadmin paranoia cause me to recommend manually copying all keys and trusting nothing. Copy /var/cfengine/ppkeys/localhost.pub from the server to all the clients and from the clients to the server in the same directory, renaming them /var/cfengine/ppkeys/root-10.11.0.1.pub, where the IP is 10.11.0.1.

On the server side, cfservd.conf must be configured to allow clients to access particular directories. To do this, create an AllowConnectionsFrom and an admit section:

# cfservd.conf
control:
    AllowConnectionsFrom = ( 192.168.0.0/24 )

admit:
    /configs/datacenter1 example1.com
    /configs/datacenter2 example2.com

To test your example client to see whether it is connecting to the cfengine server, make sure port 5308 is clear between them, and run the server with:

cfservd -v -d2

And on the client, run:

cfagent -v --no-splay

This will give you a lot of debugging information on the server side to see what's working and what isn't.

Now, let's take a look at distributing a configuration file. Although cfengine has a full-featured file editor in the editfiles command, using this method for distributing configurations is not advised. The copy command will move a file from the server to the client machine with .cfnew appended to the filename. Then, once the file has been copied completely, it renames the file and saves the old copy as .cfsaved in the specified directory. Here's the copy command syntax:

copy:
    class::
        <<master-file>>
            dest=target-file
            server=server
            mode=mode
            owner=owner
            group=group
            backup=true/false
            repository=backup dir
            recurse=number/inf/0
            define=classlist

Only the dest= is required, along with the filename to save at the destination. These can be different. Here's another example:

copy:
    linux::
        $(copydir)/linux/resolv.conf
            dest=/etc/resolv.conf
            server=cfengine.example1.com
            mode=644
            owner=root
            group=root
            backup=true
            repository=/var/cfengine/cfbackup
            recurse=0
            define=copiedresolvdotconf

The last line in this copy statement assigns this host to a group called copiedresolvdotconf. Although we don't have to do anything after copying this particular file, we may want to do some action on all hosts that just had this file successfully sent to them, such as sending an e-mail or restarting a process. As another example, if you update a configuration file that is attached to a dæmon, you may want to send a SIGHUP to the process to cause it to reread the configuration file. This is common with Apache's httpd.conf or inetd.conf. If the copy is not successful, this server won't be added to the copiedresolvdotconf class. You can query all servers in the network to see whether they are members and, if not, find out what went wrong.
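For example, a sketch in the same cfengine 2 syntax used above (the mail command and address are illustrative) that notifies you from every host that just received the file:

```
shellcommands:
    copiedresolvdotconf::
        "/bin/mail -s 'resolv.conf updated' root@example1.com < /dev/null"
```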

A great way to version control your config files is to use a cfengine variable for the filename being copied to control which version gets distributed. Such a line may look something like this:

copy:
    linux::
        $(copydir)/linux/$(resolv_conf)

Or, better yet, you can use cfengine's class-specific variables, whose scope is limited to the class with which they are associated. This makes copy statements much more elegant and can simplify changes as your cfengine files scale:

control:
    # $(resolve_conf) value depends on context:
    # is this a linux machine or hpux?
    linux:: resolve_conf = ( $(copydir)/linux/resolv.conf )
    hpux:: resolve_conf = ( $(copydir)/hpux/resolv.conf )

copy:
    linux::
        $(resolve_conf)

Here is a full cfagent.conf file that makes use of everything I've covered thus far. It also adds some practical examples of how to do sysadmin work with cfengine:

# cfagent.conf

control:
    actionsequence = ( files editfiles processes )
    AddInstallable = ( cron_restart )
    solaris:: crontab = ( /var/spool/cron/crontabs/root )
    linux:: crontab = ( /var/spool/cron/root )

files:
    solaris::
        $(crontab)
            action=touch
    linux::
        $(crontab)
            action=touch

editfiles:
    solaris::
        { $(crontab)
          AppendIfNoSuchLine "0,10,20,30,40,50 * * * * /var/cfengine/sbin/cfexecd -F"
          DefineClasses "cron_restart"
        }
    linux::
        { $(crontab)
          AppendIfNoSuchLine "0,10,20,30,40,50 * * * * /var/cfengine/sbin/cfexecd -F"
          # linux doesn't need a cron restart
        }

shellcommands:
    solaris.cron_restart::
        "/etc/init.d/cron stop"
        "/etc/init.d/cron start"

import:
    any::
        groups.cf
        copy.cf

The above is a full cfagent configuration that adds cfengine execution from cron to each client (if it's Linux or Solaris). So, effectively, once you run cfengine manually for the first time with this cfagent.conf file, cfengine will continue to run every ten minutes from that host, but you won't need to edit or restart cron. The control section of the cfagent.conf is where you can define some variables that will control how cfengine handles the configuration file. actionsequence tells cfengine what order to execute each command, and AddInstallable is a variable that holds soft groups that get defined later in the file in a "define" statement, such as after the editfiles command, where the line is DefineClasses "cron_restart". The reason for using AddInstallable is that sometimes cfengine skips over groups that are defined after command execution, and defining that group in the control section ensures that the command will be recognized throughout the configuration.

Being able to check configuration files out from a versioning system and distribute them to a set of servers is a powerful system administration tool. A number of independent tools will do a subset of cfengine's work (such as rsync, ssh and make), but nothing else allows a small group of system administrators to manage such a large group of servers. Centralizing configuration management has the dual benefit of information and control, and cfengine provides these benefits in a free, open-source tool for your infrastructure and application environments.

Scott Lackey is an independent technology consultant who has developed and deployed configuration management solutions across industry, from NASA to Wall Street. Contact him at slackey@violetconsulting.net, www.violetconsulting.net.


There are many different ways to control a Linux system over a network, and many reasons you might want to do so. When covering remote control in past columns, I've tended to focus on server-oriented usage scenarios and tools, which is to say, on administering servers via text-based applications, such as OpenSSH. But what about GUI-based applications and remote desktops?

Remote desktop sessions can be very useful for technical support, surveillance and remote control of desktop applications. But it isn't always necessary to access an entire desktop; you may need to run only one or two specific graphical applications.

In this month's column, I describe how to use VNC (Virtual Network Computing) for remote desktop sessions and OpenSSH with X forwarding for remote access to specific applications. Our focus here, of course, is on using these tools securely, and I include a healthy dose of opinion as to the relative merits of each.

Remote Desktops vs. Remote Applications
So, which approach should you use: remote desktops or remote applications? If you've come to Linux from the Microsoft world, you may be tempted to assume that because Terminal Services in Windows is so useful, you have to have some sort of remote desktop access in Linux, too. But that may not be the case.

Linux and most other UNIX-based operating systems use the X Window System as the basis for their various graphical environments. And the X Window System was designed to be run over networks. In fact, it treats your local system as a self-contained network over which different parts of the X Window System communicate.

Accordingly, it's not only possible but easy to run individual X Window System applications over TCP/IP networks; that is, to display the output (window) of a remotely executed graphical application on your local system. Because the X Window System's use of networks isn't terribly secure (the X Window System has no native support whatsoever for any kind of encryption), nowadays we usually tunnel X Window System application windows over the Secure Shell (SSH), especially OpenSSH.

The advantage of tunneling individual application windows is that it's faster and generally more secure than running the entire desktop remotely. The disadvantages are that OpenSSH has a history of security vulnerabilities, and for many Linux newcomers, forwarding graphical applications via commands entered in a shell session is counterintuitive. And besides, as I mentioned earlier, remote desktop control (or even just viewing) can be very useful for technical support and for security surveillance.

Using OpenSSH with X Window System Forwarding
Having said all that, tunneling X Window System applications over OpenSSH may be a lot easier than you imagine. All you need is a client system running an X server (for example, a Linux desktop system or a Windows system running the Cygwin X server) and a destination system running the OpenSSH dæmon (sshd).

Note that I didn't say "a destination system running sshd and an X server". This is because X servers, oddly enough, don't actually run or control X Window System applications; they merely display their output. Therefore, if you're running an X Window System application on a remote system, you need to run an X server on your local system, not on the remote system. The application will execute on the remote system and send its output to your local X server's display.

Suppose you've got two systems, mylaptop and remotebox, and you want to monitor system resources on remotebox with the GNOME System Monitor. Suppose further you're running the X Window System on mylaptop and sshd on remotebox.

First, from a terminal window or xterm on mylaptop, you'd open an SSH session like this:

mick@mylaptop:~$ ssh -X admin-slave@remotebox
admin-slave@remotebox's password:
Last login: Wed Jun 11 21:50:19 2008 from dtcla00b674986d
admin-slave@remotebox:~$

Note the -X flag in my ssh command. This enables X Window System forwarding for the SSH session. In order for that to work, sshd on the remote system must be configured with X11Forwarding set to yes in its /etc/ssh/sshd_config file. On many distributions, yes is the default setting, but check yours to be sure.
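The server-side requirement is a single line; a minimal /etc/ssh/sshd_config fragment looks like this (remember to reload sshd after editing):

```
# /etc/ssh/sshd_config on remotebox
X11Forwarding yes
```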

Next, to run the GNOME System Monitor on remotebox, such that its output (window) is displayed on mylaptop, simply execute it from within the same SSH session:

admin-slave@remotebox:~$ gnome-system-monitor &

The trailing ampersand (&) causes this command to run in the background, so you can initiate other commands from the same shell session. Without this, the cursor won't reappear in your shell window until you kill the command you just started.

Secured Remote Desktop/Application Sessions
Run graphical applications from afar, securely. MICK BAUER

At this point, the GNOME System Monitor window should appear on mylaptop's screen, displaying system performance information for remotebox. And that really is all there is to it.

This technique works for practically any X Window System application installed on the remote system. The only catch is that you need to know the name of anything you want to run in this way; that is, the actual name of the executable file.

If you're accustomed to starting your X Window System applications from a menu on your desktop, you may not know the names of their corresponding executables. One quick way to find out is to open your desktop manager's menu editor, and then view the properties screen for the application in question.

For example, on a GNOME desktop, you would right-click on the Applications menu button, select Edit Menus, scroll down to System/Administration, right-click on System Monitor and select Properties. This pops up a window whose Command field shows the name gnome-system-monitor.

Figure 1 shows the Launcher Properties, not for the GNOME System Monitor, but instead for the GNOME File Browser, which is a better example, because its launcher entry includes some command-line options. Obviously, all such options also can be used when starting X applications over SSH.

Figure 1. Launcher Properties for the GNOME File Browser (Nautilus)

If this sounds like too much trouble, or if you're worried about accidentally messing up your desktop menus, you simply can run the application in question, issue the command ps auxw in a terminal window, and find the entry that corresponds to your application. The last field in each row of the output from ps is the executable's name plus the command-line options with which it was invoked.
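A sketch of that ps technique as a one-liner (output varies by system; the awk program simply reprints fields 11 onward, the command and its options):

```shell
#!/bin/sh
# List unique command lines of running processes; scan the list for the
# GUI application you started from the menu.
ps auxw | awk 'NR > 1 { out = $11; for (i = 12; i <= NF; i++) out = out " " $i; print out }' | sort -u
```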

Once you've finished running your remote X Window System application, you can close it the usual way (selecting Exit from its File menu, clicking the x button in the upper right-hand corner of its window and so forth). Don't forget to close your SSH session, too, by issuing the command exit in the terminal window where you're running SSH.

Virtual Network Computing (VNC)
Now that I've shown you the preferred way to run remote X Window System applications, let's discuss how to control an entire remote desktop. In the Linux/UNIX world, the most popular tool for this is Virtual Network Computing, or VNC.

Originally a project of the Olivetti Research Laboratory (which was subsequently acquired by Oracle and then AT&T before being shut down), VNC uses a protocol called Remote Frame Buffer (RFB). The original creators of VNC now maintain the application suite RealVNC, which is available in free and commercial versions, but TightVNC, UltraVNC and GNOME's vino VNC server and vinagre VNC client also are popular.

VNC's strengths are its simplicity, ubiquity and portability; it runs on many different operating systems. Because it runs over a single TCP port (usually TCP 5900), it's also firewall-friendly and easy to tunnel.

Its security, however, is somewhat problematic. VNC authentication uses a DES-based transaction that, if eavesdropped-on, is vulnerable to optimized brute-force (password-guessing) attacks. This vulnerability is exacerbated by the fact that many versions of VNC support only eight-character passwords.

Furthermore, VNC session data usually is transmitted unencrypted. Only a couple flavors of VNC support TLS encryption of RFB streams, and it isn't clear how securely TLS has been implemented even in those. Thus, an attacker using a trivially hacked VNC client may be able to reconstruct and view eavesdropped VNC streams.


But Don't Real Sysadmins Stick to Terminal Sessions?
If you've read my book or my past columns, you've endured my repeated exhortations to keep the X Window System off of Internet-facing servers, or any other system on which it isn't needed, due to X's complexity and history of security vulnerabilities. So why am I devoting an entire column to graphical remote system administration?

Don't worry, I haven't gone soft-hearted (though possibly slightly soft-headed). I stand by that advice. But there are plenty of contexts in which you may need to administer or view things remotely besides hardened servers in Internet-facing DMZ networks.

And not all people who need to run remote applications in those non-Internet-DMZ scenarios are advanced Linux users. Should they be forbidden from doing what they need to do until they've mastered using the vi editor and writing bash scripts? Especially given that it is possible to mitigate some of the risks associated with the X Window System and VNC?

Of course they shouldn't, although I do encourage all Linux newcomers to embrace the command line. The day may come when Linux is a truly graphically oriented operating system, like Mac OS, but for now, pretty much the entire OS is driven by configuration files in /etc (and in users' home directories), and that's unlikely to change soon.

Luckily, as it operates over a single TCP port, VNC is easy to tunnel through SSH, through Virtual Private Network (VPN) sessions and through TLS/SSL wrappers, such as stunnel. Let's look at a simple VNC-over-SSH example.

Tunneling VNC over SSH
To tunnel VNC over SSH, your remote system must be running an SSH dæmon (sshd) and a VNC server application. Your local system must have an SSH client (ssh) and a VNC client application.

Our example remote system, remotebox, already is running sshd. Suppose it also has vino, which is also known as the GNOME Remote Desktop Preferences applet (on Ubuntu systems, it's located in the System menu's Preferences section). First, presumably from remotebox's local console, you need to open that applet and enable remote desktop sharing. Figure 2 shows vino's General settings.

If you want to view only this system's remote desktop without controlling it, uncheck Allow other users to control your desktop. If you don't want to have to confirm remote connections explicitly (for example, because you want to connect to this system when it's unattended), you can uncheck the Ask you for confirmation box. Any time you leave yourself logged in to an unattended system, be sure to use a password-protected screensaver.

vino is limited in this way: because vino is loaded only after you log in, you can use it only to connect to a fully logged-in GNOME session in progress, not, for example, to a gdm (GNOME login) prompt. Unlike vino, other versions of VNC can be invoked as needed from xinetd or inetd. That technique is out of the scope of this article, but see Resources for a link to a how-to for doing so in Ubuntu, or simply Google the string "vnc xinetd".

Continuing with our vino example, don't close that Remote Desktop Preferences applet yet. Be sure to check the Require the user to enter this password box, and to select a difficult-to-guess password. Remember, vino runs in an already-logged-in desktop session, so unless you set a password here, you'll run the risk of allowing completely unauthenticated sessions (depending on whether a password-protected screensaver is running).

If your remote system will be run logged in but unattended, you probably will want to uncheck Ask you for confirmation. Again, don't forget that locked screensaver.

We're not done setting up vino on remotebox yet. Figure 3 shows the Advanced Settings tab.

Several advanced settings are particularly noteworthy. First, you should check Only allow local connections, after which remote connections still will be possible, but only via a port-forwarder like SSH or stunnel. Second, you may want to set a custom TCP port for vino to listen on via the Use an alternative port option. In this example, I've chosen 20226. This is an instance of useful security-through-obscurity; if our other (more meaningful) security controls fail, at least we won't be running VNC on the obvious default port.

Figure 2. General Settings in GNOME Remote Desktop Preferences (vino)

Figure 3. Advanced Settings in GNOME Remote Desktop Preferences (vino)

Also, you should check the box next to Lock screen on disconnect, but you probably should not check Require encryption, as SSH will provide our session encryption, and adding redundant (and not-necessarily-known-good) encryption will only increase vino's already-significant latency. If you think there's only a moderate level of risk of eavesdropping in the environment in which you want to use vino (for example, on a small private local-area network inaccessible from the Internet), vino's TLS implementation may be good enough for you. In that case, you may opt to check the Require encryption box and skip the SSH tunneling.

(If any of you know more about TLS in vino than I was ableto divine from the Internet please write in During my researchfor this article I found no documentation or on-line discus-sions of vinorsquos TLS design details whatsoevermdashbeyond peoplecommenting that the soundness of TLS in vino is unknown)

This month I seem to be offering you more ldquoopt-outrdquoopportunities in my examples than usual Perhaps Irsquom becom-ing less dogmatic with age Regardless letrsquos assume yoursquove fol-lowed my advice to forego vinorsquos encryption in favor of SSH

At this point yoursquore done with the server-side settings Youwonrsquot have to change those again If you restart your GNOMEsession on remotebox vino will restart as well with the optionsyou just set The following steps however are necessary eachtime you want to initiate a VNCSSH session

On mylaptop (the system from which you want to connect toremotebox) open a terminal window and type this command

mick@mylaptop:~$ ssh -L 20226:remotebox:20226 admin-slave@remotebox

OpenSSH's -L option sets up a local port-forwarder. In this example, connections to mylaptop's TCP port 20226 will be forwarded to the same port on remotebox. The syntax for this option is "-L [localport]:[remote IP or hostname]:[remoteport]". You can use any available local TCP port you like (higher than 1024, unless you're running SSH as root), but the remote port must correspond to the alternative port you set vino to listen on (20226 in our example), or if you didn't set an alternative port, it should be VNC's default of 5900.
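As a concrete sketch of that syntax, if you had left vino on VNC's default port rather than setting an alternative one, the forward would look like this instead (hostnames are this article's examples; the choice of local port is arbitrary):

```
# Forward mylaptop's TCP port 5900 to port 5900 on remotebox:
ssh -L 5900:remotebox:5900 admin-slave@remotebox
```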

That's it for the SSH session. You'll be prompted for a password and then given a bash prompt on remotebox. But you won't need it, except to enter exit when your VNC session is finished. It's time to fire up your VNC client.

vino's companion client, vinagre (also known as the GNOME Remote Desktop Viewer), is good enough for our purposes here. On Ubuntu systems, it's located in the Applications menu in the Internet section. In Figure 4, I've opened the Remote Desktop Viewer and clicked the Connect button. As you can see, rather than remotebox, I've entered localhost as the hostname. I've also entered port number 20226.

When I click the Connect button, vinagre connects to mylaptop's local TCP port 20226, which actually is being listened on by my local SSH process. SSH forwards this connection attempt through its encrypted connection to TCP 20226 on remotebox, which is being listened on by remotebox's vino process.

At this point, I'm prompted for remotebox's vino password (shown in Figure 2). On successful authentication, I'll have full access to my active desktop session on remotebox.

To end the session, I close the Remote Desktop Viewer's session and then enter exit in my SSH session to remotebox. That's all there is to it.

This technique is easy to adapt to other versions of VNC servers and clients, and probably also to other versions of SSH (there are commercial alternatives to OpenSSH, including Windows versions). I mentioned that VNC client applications are available for a wide variety of platforms; on any such platform, you can use this technique, so long as you also have an SSH client that supports port forwarding.
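For instance, with TightVNC's vncviewer as the client instead of vinagre, connecting through the same local forward might look like the line below (in that client, the double colon specifies a raw TCP port rather than a display number; this is a sketch, not a command from the article):

```
vncviewer localhost::20226
```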

Conclusion

Thus ends my crash course on how to control graphical applications over networks. I hope my examples of both techniques, SSH with X forwarding and VNC over SSH, help you get started with whatever particular distribution and software packages you prefer. Be safe!

Mick Bauer (darthelmo@wiremonkeys.org) is Network Security Architect for one of the US's largest banks. He is the author of the O'Reilly book Linux Server Security, 2nd edition (formerly called Building Secure Servers With Linux), an occasional presenter at information security conferences and composer of the "Network Engineering Polka".


Resources

The Cygwin/X project (information about Cygwin's free X server for Microsoft Windows): x.cygwin.com

Tichondrius' HOWTO for setting up VNC with resumable sessions (Ubuntu-centric, but mostly applicable to other distributions): ubuntuforums.org/showthread.php?t=122402

Wikipedia's VNC article, which may be helpful in making sense of the different flavors of VNC: en.wikipedia.org/wiki/Vnc

Figure 4. Using vinagre to Connect to an SSH-Forwarded Local Port


Does anyone remember Linuxcare? Founded in 1998, Linuxcare was a company that provided support services for Linux users in corporate environments. I remember seeing Linuxcare at the first-ever LinuxWorld conference in San Jose, and the thing I took away from that LinuxWorld was the Linuxcare Bootable Business Card (BBC). The BBC was a 50MB, cut-down Linux distribution that fit on a business-card-size compact disc. I used that distribution to recover and repair quite a few machines, until the advent of Knoppix. I always loved the portability of that little CD though, and I missed it greatly until I stumbled across Damn Small Linux (DSL) one day.

After reading through the DSL Web site, I discovered that it was possible to run DSL off of a bootable USB key, and that old love for the Bootable Business Card was rekindled in a new way. It wasn't until I had a conversation with fellow sysadmin Kyle Rankin about the PXE boot environment he'd implemented that I realized it might be possible to set up a USB key to do more than merely boot a recovery environment. Before long, I had added the CentOS and Ubuntu netinstalls to my little USB key. Not long after that, I was mentioning this in my favorite IRC channel, and one of the fellows in there suggested I put the code on SourceForge and call it Billix. I'd had a couple beers by then and thought it sounded like a great idea. In that instant, Billix was born.

Billix is an aggregation of many different tools that can be useful to system administrators, all compressed down to fit within a 256MB bootable USB thumbdrive. The 256MB size is not an arbitrary number; rather, it was chosen because USB thumbdrives are very inexpensive at that size (many companies now give them away as advertising gimmicks). This allows me to have many Billix keys lying around, just waiting to be used. Because the keys are cheap or free, I don't feel bad about leaving one in a server for a day or two. If your USB drive is larger than 256MB, you still can use it for its designed purpose: storing files. Billix doesn't hamper normal use of the USB drive in any way. There also is an ISO distribution of Billix if you want to burn a CD of it, but I feel it's not nearly as convenient as having it on a USB key.

The current Billix distribution (0.21 at the time of this writing) includes the following tools:

Damn Small Linux 4.2.5

Ubuntu 8.04 LTS netinstall

Ubuntu 7.10 netinstall

Ubuntu 6.06 LTS netinstall

Fedora 8 netinstall

CentOS 5.1 netinstall

CentOS 4.6 netinstall

Debian Etch netinstall

Debian Sarge netinstall

Memtest86 memory-checking utility

Ntpwd Windows password changing utility

DBAN disk wiper utility

So, with one USB key, a system administrator can recover or repair a machine, install one of eight different Linux distributions, test the memory in a system, get into a Windows machine with a lost password, or wipe the disks of a machine before repurpose or disposal. In order to install any of the netinstall-based Linux distributions, a working Internet connection with DHCP is required, as the netinstall downloads the installation bits for each distribution on the fly from Internet-based mirrors.

Hopefully, you're excited to check out Billix. You simply can download the ISO version and burn it to a CD to get started, but the full utility of Billix really shines when you install it on a USB disk. Before you install it on a USB disk, you need to meet the following prerequisites:

Billix: a Sysadmin's Swiss Army Knife

Turn that spare USB stick into a sysadmin's dream with Billix. BILL CHILDERS



256MB or greater USB drive, with a FAT- or FAT32-based filesystem.

Internet connection with DHCP (for netinstalls only; not required for DSL, Windows password removal or disk wiping with DBAN).

install-mbr (part of the mbr package on Ubuntu or Debian, needed for some USB drives).

syslinux (from the syslinux package on Ubuntu or Debian, required to create the bootsector on the USB drive).

Your system must be capable of booting from USB devices (most have this ability if they're made after 2005).

To install the USB-based version of Billix, first check your drive. If that drive has the U3 Windows software on it, you may want to remove it to unlock all of the drive's capacity (see the Resources section for U3 removal utilities, which are typically Windows-based). Next, if your USB drive has data on it, back up the data. I cannot stress this enough. You will be making adjustments to the partition table of the USB drive, so backing up any data that already is on the key is critical. Download the latest version of Billix from the SourceForge.net project page to your computer. Once the download is complete, untar the contents of the tarball to the root directory of your USB drive.

Now that the contents of the tarball are on your USB drive, you need to install a Master Boot Record (MBR) on the drive and set a bootsector on the drive. The Master Boot Record needs to be set up on the USB drive first. Issue an install-mbr -p1 <device> (where <device> is your USB drive, such as /dev/sdb). Warning: make sure that you get the device of the USB drive correct, or you run the risk of messing up the MBR on your system's boot device. The -p1 option tells install-mbr to set the first partition as active (that's the one that will contain the bootsector).

Next, the bootsector needs to be installed within the first partition. Run syslinux -s <device/partition> (where <device/partition> is the device and partition of the USB drive, such as /dev/sdb1). Warning: much like installing the MBR, installing the bootsector can be a dangerous operation if you run it on the wrong device, so take care and double-check your command line before pressing the Enter key.
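Putting those steps together, the whole installation might look like the following sketch. The device name /dev/sdb, the mountpoint /media/usb and the tarball name are all assumptions for illustration; as the warnings above stress, double-check the device name first, because these commands will damage the boot record of whatever disk you point them at:

```
# Untar Billix into the root of the (mounted) USB drive:
tar -xf billix-current.tar -C /media/usb   # hypothetical tarball name
umount /media/usb

# Write the MBR, marking the first partition active:
install-mbr -p1 /dev/sdb

# Write the syslinux bootsector into the first partition:
syslinux -s /dev/sdb1
```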

Figure 1. The Billix Boot Menu

At this point, your USB drive can be unmounted safely, and you can test it out by booting from the USB drive. Once your system successfully boots from the USB drive, you should see a menu similar to the one shown in Figure 1. Simply choose the number for what you want to boot, run or install, and that distribution will spring into action. If you don't select a number, Damn Small Linux will boot automatically after 30 seconds.

Damn Small Linux is a miniature version of Knoppix (it actually has much of the automatic hardware-detection routines of Knoppix in it). As such, it makes an excellent rescue environment, or it can be used as a quick "trusted desktop" in the event you need to "borrow" a friend's computer to do something. I have used DSL in the past to commandeer a system temporarily at a cybercafé, so I could log in to work and fix a sick server. I've even used DSL to boot and mount a corrupted Windows filesystem, and I was able to save some of the data. DSL is fairly full-featured for its size, and it comes with two window managers (JWM or Fluxbox). It can be configured to save its data back to the USB disk in a persistent fashion, so you always can be sure you have your critical files with you and that it's easily accessible.

All the Linux distribution installations have one thing in common: they are all network-based installs. Although this is a good thing for Billix, as they take up very little space (around 10MB for each distro), it can be a bad thing during installation, as the installation time will vary with the speed of your Internet connection. There is one other upside to a network-based installation. In many cases, there is no need to update


Troubleshooting Billix

A few things can go wrong when converting a USB key to run Billix (or any USB-based distribution). The most common issue is for the USB drive to fail to boot the system. This can be due to several things. Older systems often split USB disk support into USB-Floppy emulation and USB-HDD emulation. For Billix to work on these systems, USB-HDD needs to be enabled. If your drive came with the U3 Windows-based software vault, this typically needs to be disabled or removed prior to installing Billix.

If you're seeing "MBR123" or something similar in the upper-left corner, but the system is hanging, you have a misconfigured MBR. Try install-mbr again, and make sure to use the -p1 switch. You will need to run syslinux again after running install-mbr. If all else fails, you probably need to wipe the USB drive and begin again. Back up the data on the USB drive, then use fdisk to build a new partition table (make sure to set it as FAT or FAT32). Use mkfs.vfat (with the -F 32 switch if it's a FAT32 filesystem) to build a new blank filesystem, untar the tarball again, and run install-mbr and syslinux on the newly defined filesystem.
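That last-resort rebuild might look something like this sketch (again assuming the drive is /dev/sdb; verify the device name before running, as every one of these commands is destructive):

```
fdisk /dev/sdb              # recreate a single partition, type W95 FAT32
mkfs.vfat -F 32 /dev/sdb1   # build a new, blank FAT32 filesystem
# ...then untar the Billix tarball onto the drive and rerun
# install-mbr -p1 and syslinux -s as described earlier.
```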

the newly installed operating system after installation, because the OS bits that are downloaded are typically up to date. Note that when using the Red Hat-based installers (CentOS 4.6, CentOS 5.1 and Fedora 8), the system may appear to hang during the download of a file called minstg2.img. The system probably isn't hanging; it's just downloading that file, which is fairly large (around 40MB), so it can take a while, depending on the speed of the mirror and the speed of your connection. Take care not to specify the USB disk accidentally as the install target for the distribution you are attempting to install.

The memtest86 utility has been around for quite a few years, yet it's a key tool for a sysadmin when faced with a flaky computer. It does only one thing, but it does it very well: it tests the RAM of a system very thoroughly. Simply boot off the USB drive, select memtest from the menu, and press Enter, and memtest86 will load and begin testing the RAM of the system immediately. At this point, you can remove the USB drive from the computer. It's no longer needed, as memtest86 is very small and loads completely into memory on startup.

The ntpwd Windows password "cracking" tool can be a controversial tool, but it is included in the Billix distribution because, as a system administrator, I've been asked countless times to get into Windows systems (or accounts on Windows systems) where the password has been lost or forgotten. The ntpwd utility can be a bit daunting, as the UI is text-based and nearly nonexistent, but it does a good job of mounting FAT32- or NTFS-based partitions, editing the SAM account database and saving those changes. Be sure to read all the messages that ntpwd displays, and take care to select the proper disk partition to edit. Also, take the program's advice and nullify a password rather than trying to change it from within the interface; zeroing the password works much more reliably.

DBAN (otherwise known as Darik's Boot and Nuke) is a very good "nuke it from orbit" hard disk wiper. It provides various levels of wipe, from a basic "overwrite the disk with zeros" to a full DoD-certified, multipass wipe. Like memtest86, DBAN is small and loads completely into memory, so you can boot the utility, remove the USB drive, start a wipe and move on to another system. I've used this to wipe clean disks on systems before handing them over to a recycler or before selling a system.

In closing, Billix may not make you coffee in the morning or eradicate Windows from the face of the earth, but having a USB key in your pocket that offers you the functionality to do all of those tasks quickly and easily can make the life of a system administrator (or any Linux-oriented person) much easier.

Bill Childers is an IT Manager in Silicon Valley, where he lives with his wife and two children. He enjoys Linux far too much, and probably should get more sun from time to time. In his spare time, he does work with the Gilroy Garlic Festival, but he does not smell like garlic.




Expanding Billix

It's relatively easy to expand Billix to support other Linux distributions, such as Knoppix or the Ubuntu live CDs. Copy the contents of the Billix USB tarball to a directory on your hard disk, and download the distro you want. Copy the necessary kernel and initrd to the directory where you put the contents of the USB tarball, taking care to rename any files if there are files in that directory with the same name. Copy any compressed filesystems that your new distro may use to the USB drive (for example, Knoppix has the KNOPPIX directory, and Puppy Linux uses PUP_XXX.SFS). Then, look at the boot configuration for that distro (it should be in isolinux.cfg). Take the necessary lines out of that file, and put them in the Billix syslinux.cfg file, changing filenames as necessary. Optionally, you can add a menu item to the boot.msg file. Finally, run syslinux -s <device>, and reboot your system to test out your newly expanded Billix.
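For example, a Knoppix entry carried over into Billix's syslinux.cfg might look like the fragment below. The label and filenames are purely illustrative; take the real kernel, initrd and append parameters from the distro's own isolinux.cfg:

```
label knoppix
  kernel knx-linux
  append initrd=knx-minirt.gz ramdisk_size=100000 lang=us
```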

I have a 2GB USB drive that has a "Super-Billix" installation that includes Knoppix and Ubuntu 8.04. An added bonus of having the entire Ubuntu live CD in your pocket is that, thanks to the speed of USB 2.0, you can install Ubuntu in less than ten minutes, which would be really useful at an installfest. There is good information on creating Ubuntu-bootable USB drives available at the Pendrive Linux Web site.

Alternatively, a really neat thing to do (but way beyond the scope of this article) is to convert Billix into a network-boot (via Pre-Execution Environment, or PXE) environment. I've actually got a VMware virtual machine running Billix as a PXE boot server.

Resources

Billix Project Page: sourceforge.net/projects/billix

Damn Small Linux: www.damnsmalllinux.org

DBAN Project Page: dban.sourceforge.net

Knoppix: www.knoppix.net

Pendrive Linux: www.pendrivelinux.com

Syslinux: syslinux.zytor.com/index.php

Pxelinux: syslinux.zytor.com/pxe.php

U3 Removal Software: www.u3.com/uninstall


If a tree falls in the woods and no one is there to hear it, does it make a sound? This is the classic query designed to place your mind into the Zen-like state known as the silent mind. Whether or not you want to hear a tree fall, if you run a network, you probably want to hear a server when it goes down. Many organizations utilize the long-established Simple Network Management Protocol (SNMP) as a way to monitor their networks proactively and listen for things going down.

At a rudimentary level, SNMP requires only two items to work: a management server and a managed device (or devices). The management server pulls status and health information at regular intervals from the managed devices and stores the information in a table. Managed devices use local SNMP agents to notify the management server when defined behavior occurs (such as errors or "traps"), which are stored in the same table on the server. The result is an accurate, real-time reporting mechanism for outages. However, SNMP as a protocol does not stipulate how the data in these tables is to be presented and managed for the end user. That's where a promising new open-source network-monitoring software called Zenoss (pronounced Zeen-ohss) comes in.

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment. The Core version of Zenoss used in this article is freely available under the GPLv2. An Enterprise version also is available with additional features and support. In this article, we install Zenoss on a CentOS 5.1 system to observe its usefulness in a network-monitoring role. From there, we create a simulated multisystem server network using the following systems: a Fedora-based Postfix e-mail server, an Ubuntu server running Apache and a Windows server running File and Print services. To conserve space, only the CentOS installation is discussed in detail here. For the managed systems, only SNMP installation and configuration are covered.

Building the Zenoss Server

Begin by selecting your hardware. Zenoss lacks specific hardware requirements, but it relies heavily on MySQL, so you can use MySQL requirements as a rough guideline. I recommend using the fastest processor available, 1GB of memory, hard disks fast enough to provide acceptable MySQL performance and Gigabit Ethernet for the network. I ran several test configurations, and this configuration seemed adequate for a medium-size network (100+ nodes/devices). To keep configuration simple, all firewalls and SELinux instances were disabled in the test environment. If you use firewalls in your environment, open ports 161 (SNMP), 8080 (Zenoss Management Page) and 514 (if you integrate syslog with Zenoss).

Install CentOS 5.1 on the server using your own preferences. I used a bare install with no X Window System or desktop manager. Assign a static IP address and any other pertinent network information (DNS servers and so forth). After the OS install is complete, install the following packages using the yum command below:

yum install mysql mysql-server net-snmp net-snmp-utils gmp httpd

If the mysqld or the httpd service has not started after yum installs it, start it and set it to run for your configured runlevel. Next, download the latest Zenoss Core rpm from Sourceforge.net (2.1.3 at the time of this writing), and install it using rpm from the command line. To start all the Zenoss-related dæmons after the rpm has been installed, type the following at a command prompt:

service zenoss start
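For the earlier step of making sure mysqld and httpd are running, the standard SysV tools on CentOS 5.x would look something like this (a general sketch, not Zenoss-specific commands; adjust the runlevels to your configuration):

```
service mysqld start
service httpd start
chkconfig --level 345 mysqld on
chkconfig --level 345 httpd on
```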

Launch a Web browser from any machine, and type the IP address of the Zenoss server using port 8080 (for example, http://192.168.142.6:8080). Log in to the site using the default account admin with a password of zenoss. This brings up the main dashboard. The dashboard is a compartmentalized view of the state of your managed devices. If you don't like the default display, you can arrange your dashboard any way you want using the various drop-down lists on the portlets (windows). I recommend setting the Production States portlet to display Production, so we can see our test systems after they are added.

Almost everything related to managed devices in Zenoss revolves around classes. With classes, you can create an infinite number of system, process or service classifications to monitor. To begin adding devices, we need to set our SNMP community strings at the top-level Devices class. SNMP community strings are like passphrases used to authenticate traffic between devices. If one device wants to communicate with another, they must have matching community names/strings. In many deployments, administrators use the default community name of public (and/or private), which creates a security risk. I recommend changing these strings and making them into a short phrase. You can add numbers and characters to make the community name more complex to guess/crack, but I find phrases easier to remember.

Click on the Devices link on the navigation menu on the left, so that Devices is listed near the top of the page. Click on the zProperties tab, and scroll down. Enter an SNMP

Zenoss and the Art of Network Monitoring

If a server goes down, do you want to hear it? JERAMIAH BOWLING



community string in the zSNMPCommunity field. For our test environment, I used the string whatsourclearanceclarence. You can use different strings with different subclasses of systems or individual systems, but by setting it at the Devices class, it will be used for any subclasses unless it is overridden. You also could list multiple strings in the zSNMPCommunities field under the Devices class, which allows you to define multiple strings for the discovery process discussed later. Make sure your community string (zSNMPCommunity) is in this list.

Installing Net-SNMP on Linux Clients

Now, let's set up our Linux systems so they can talk to the Zenoss server. After installing and configuring the operating systems on our other Linux servers, install the Net-SNMP package on each, using the following command on the Ubuntu server:

sudo apt-get install snmpd

And, on the Fedora server, use:

yum install net-snmp

Once the Net-SNMP packages are installed, edit out any other lines in the Access Control sections at the beginning of /etc/snmp/snmpd.conf, and add the following lines:

#       sec.name   source            community
com2sec local      localhost         whatsourclearanceclarence
com2sec mynetwork  192.168.142.0/24  whatsourclearanceclarence

#       group.name  sec.model  sec.name
group   MyROGroup   v1         local
group   MyROGroup   v1         mynetwork
group   MyROGroup   v2c        local
group   MyROGroup   v2c        mynetwork

#       name  incl/excl  subtree  mask
view    all   included   .1       80

#       group      context  sec.model  sec.level  prefix  read  write  notif
access  MyROGroup  ""       any        noauth     exact   all   none   none

Do not edit out any lines beneath the last Access Control section. Please note that the above is only a mildly restrictive configuration. Consult the snmpd.conf file or the Net-SNMP documentation if you want to tighten access. On the Ubuntu server, you also may have to change the following line in the /etc/default/snmpd file to allow SNMP to bind to anything other than the local loopback address:

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'
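Before moving on, it can be worth confirming from the Zenoss server that a client actually answers SNMP queries. A quick check with snmpwalk (part of the net-snmp-utils package installed earlier) might look like this, where the target address is a hypothetical host on the article's 192.168.142.0/24 test network:

```
snmpwalk -v 2c -c whatsourclearanceclarence 192.168.142.10 system
```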

Installing SNMP on Windows

On the Windows server, access the Add/Remove Programs utility from the Control Panel. Click on the Add/Remove Windows Components button on the left. Scroll down the list of Components, check off Management and Monitoring Tools, and click on the Details button. Check Simple Network Management Protocol in the list, and click OK to install. Close the Add/Remove window, and go into the Services console from Administrative Tools in the Control Panel. Find the SNMP service in the list, right-click on it, and click on Properties to bring up the service properties tabs. Click on the Traps tab, and type in the community name. In the list of Trap Destinations, add the IP address of the Zenoss server. Now, click on the Security tab, and check off the Send authentication trap box, enter the community name, and give it READ-ONLY rights. Click OK, and restart the service.

Return to the Zenoss management Web page. Click the Devices link to go into the subclass of /Devices/Servers/Windows, and on the zProperties tab, enter the name of a domain admin account and password in the zWinUser and zWinPassword fields. This account gives Zenoss access to the Windows Management Instrumentation (WMI) on your Windows systems. Make sure to click Save at the bottom of the page before navigating away.

Adding Devices into Zenoss

Now that our systems have SNMP, we can add them into Zenoss. Devices can be added individually or by scanning the network. Let's do both. To add our Ubuntu server into Zenoss, click on the Add Device link under the Management navigation section. Enter the IP address of the server and the community name. Under Device Class Path, set the selection to /Server/Linux. You could add a variety of other hardware, software and Zenoss information on this page before adding a system, but at a minimum, an IP address, name and community name is required (Figure 1). Click the Add Device button, and the discovery process runs. When the results are displayed, click on the link to the new device to access it.

Figure 1. Adding a Device into Zenoss



To scan the network for devices, click the Networks link under the Browse By section of the navigation menu. If your network is not in the list, add it using CIDR notation. Once added, check the box next to your network, and use the drop-down arrow to click on the Select Discover Devices option. You will see a similar results page as the one from before. When complete, click on the links at the bottom of the results page to access the new devices. Any device found will be placed in the /Discovered class. Because we should have discovered the Fedora server and the Windows server, they should be moved to the /Devices/Servers/Linux and /Devices/Servers/Windows classes, respectively. This can be done from each server's Status tab by using the main drop-down list and selecting Manager→Change Class.

If all has gone well so far, we have a functional SNMP monitoring system that is able to monitor heartbeat/availability (Figure 2) and performance information (Figure 3) on our systems. You can customize various other Status and Performance Monitors to meet your needs, but here, we will use the default localhost monitors.

Figure 2. The Zenoss Dashboard

Figure 3. Performance data is collected almost immediately after discovery.

Creating Users and Setting E-Mail Alerts

At this point, we can use the dashboard to monitor the managed devices, but we will be notified only if we visit the site. It would be much more helpful if we could receive alerts via e-mail. To set up e-mail alerting, we need to create a separate user account, as alerts do not work under the admin account. Click on the Setting link under the Management navigation section. Using the drop-down arrow on the menu, select Add User. Enter a user name and e-mail address when prompted. Click on the new user in the list to edit its properties. Enter a password for the new account, and assign a role of Manager. Click Save at the bottom of the page. Log out of Zenoss, and log back in with the new account. Bring the settings page back up, and enter your SMTP server information. After setting up SMTP, we need to create an Alerting Rule for our new user. Click on the Users tab, and click on the account just created in the list. From the resulting page, click on the Edit tab, and enter the e-mail address to which you want alerts sent. Now, go to the Alerting Rules tab, and create a new rule using the drop-down arrow. On the edit tab of the new Alerting Rule, change the Action to email, Enabled to True, and change the Severity formula to >= Warning (Figure 4). Click Save.

Figure 4. Creating an Alert Rule

Figure 5. Zenoss alerts are sent fresh to your mailbox.



The above rule sends alerts when any Production server experiences an event rated Warning or higher (Figure 5). Using a filter, you can create any number of rules and have them apply only to specific devices or groups of devices. If you want to limit your alerts by time, to working hours for example, use the Schedule tab on the Alerting Rule to define a window. If no schedule is specified (the default), the rule runs all the time. In our rule, only one user will be notified. You also can create groups of users from the Settings page, so that multiple people are alerted, or you could use a group e-mail address in your user properties.

Services and Processes

We can expand our view of the test systems by adding a process and a service for Zenoss to monitor. When we refer to a process in Zenoss, we mean an active program, usually a dæmon, running on a managed device. Zenoss uses regular expressions to monitor processes.

To monitor Postfix on the mail server, first, let's define it as a process. Navigate to the Processes page under the Classes section of the navigation menu. Use the drop-down arrow next to OS Processes, and click Add Process. Enter Postfix as the process ID. When you return to the previous page, click on the link to the new process. On the edit tab of the process, enter master in the Regex field. Click Save before navigating away. Go to the zProperties tab of the process, and make sure the zMonitor field is set to True. Click Save again. Navigate back to the mail server from the dashboard, and on the OS tab, use the topmost menu's drop-down arrow to select Add→Add OSProcess. After the process has been added, we will be alerted if the Postfix process degrades or fails. While still on the OS tab of the

Figure 6. Zenoss comes with a plethora of predefined Windows services to monitor.


server, place a check mark next to the new Postfix process, and from the OS Processes drop-down menu, select Lock OSProcess. On the next set of options, select Lock from deletion. This protects the process from being overwritten if Zenoss remodels the server.
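To make the regex behavior concrete, here is a small Python sketch (not Zenoss code; the process list is made up) of how a pattern like master decides whether the daemon is running:

```python
import re

# Hypothetical sketch of what a process monitor does: the pattern from
# the Regex field is matched against the device's process table, and a
# miss means the process is considered down.
def process_running(regex, process_list):
    """Return True if any process command line matches the regex."""
    pattern = re.compile(regex)
    return any(pattern.search(cmd) for cmd in process_list)

# Made-up ps output for a mail server running Postfix:
ps_output = ["/usr/lib/postfix/master", "qmgr -l -t fifo -u", "/usr/sbin/sshd"]
print(process_running("master", ps_output))          # True: Postfix master found
print(process_running("master", ["/usr/sbin/sshd"])) # False: would raise an alert
```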

Services in Zenoss are defined by active network ports instead of running daemons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1-65535 and include common network apps/protocols, such as SMTP (Port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the main menu's drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).
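Conceptually, an IPService check reduces to a TCP connect test. Here is a minimal Python sketch of the idea (this is not how Zenoss implements it internally, and the hostname in the comment is hypothetical):

```python
import socket

# A service counts as "up" if its TCP port accepts a connection
# within the timeout.
def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# The HTTP IPService above is roughly equivalent to:
# port_open("webserver.example.com", 80)
```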

Only the Beginning

There are a multitude of other features in Zenoss that space here prevents covering, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9) and Zenpacks that provide additional monitoring and performance-capturing capabilities for common applications.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind as long as you work with networks, with Zenoss, at least you will be able to sleep at night knowing you will hear things when they go down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

Resources

Zenoss: www.zenoss.com

Zenoss SourceForge Downloads Page: sourceforge.net/project/showfiles.php?group_id=163126

NET-SNMP: net-snmp.sourceforge.net

CentOS: www.centos.org

CentOS 5 Mirrors: isoredirect.centos.org/centos/5/isos/i386

Figure 7 Monitoring HTTP as an IPService

Figure 8 Zenoss automatically maps your network for you

Figure 9 Multiple sites can be monitored geographically with the Google Maps API


In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market: Dell and HP/Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects, such as the Etherboot Project and the Linux Terminal Server Project, have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys worldwide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?

A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot eXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the

boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP; stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4); for example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a

Thin Clients Booting over a Wireless Bridge

How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON


Figure 1 LTSP Traffic Profile and Boot Sequence

thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means of transport, and other protocols, such as ICA or NX, can consume as little as 5kbps of bandwidth. This level of traffic is suitable for transmitting over wireless links.

How to Set Up a Thin-Client Network with a Wireless Bridge

One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1, serving PXE clients.
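For reference, here is a minimal sketch of what the subnet block in that dhcpd.conf might look like. All addresses and the pxelinux.0 path below are assumptions for a 192.168.0.x network, not values given in the article; the filename path in particular varies by distribution.

```
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.20 192.168.0.250;
    option routers 192.168.0.1;
    next-server 192.168.0.1;           # the LTSP/boot server
    filename "/ltsp/i386/pxelinux.0";  # path is distribution-dependent
}
```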

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap: 192.168.0.0/24
rpc.mountd: 192.168.0.0/24
rpc.statd: 192.168.0.0/24
in.tftpd: 192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart
sudo invoke-rc.d nfs-common restart
sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide to getting the bridge running so that the clients can boot over the bridge:

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet


adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all daemon programs manually.

Evaluating the Performance of a Thin-Client Network

The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2 Six thin clients are connected to a hub, and in turn, this is connected to a wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between

the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber;



Figure 3 A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4 The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5 Boot Time in the Presence of Background Traffic

and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Conclusion

As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.
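As a rough illustration of that bandwidth gap, here is a back-of-envelope calculation. The 200MB image size is an assumption for illustration only; the throughput figures follow the nominal and realistic rates mentioned above.

```python
# Seconds to move a given amount of data at a given effective rate.
def transfer_seconds(size_mb, throughput_mbps):
    """size_mb megabytes at throughput_mbps megabits per second."""
    return size_mb * 8 / throughput_mbps

for label, mbps in [("100Mbps Ethernet", 100),
                    ("802.11g nominal", 54),
                    ("802.11g realistic (~half nominal)", 25)]:
    print(f"{label}: {transfer_seconds(200, mbps):.0f}s")
```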

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our test shows, as illustrated in Figure 5, heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.

Acknowledgement

This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03IN31396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now, you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in

remote power and remote terminals or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup

You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP

Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), and locate the subnet section (or each host section, if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;
filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus

Set up a PXE server and then add menus to boot kickstart images, rescue disks and diagnostic tools all from the network. KYLE RANKIN



Install TFTP

After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package, or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf, if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions, if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
# the trivial file transfer protocol.
# The tftp protocol is often used to boot diskless
# workstations, download configuration files to network-aware
# printers, and to start the installation process for
# some operating systems.
service tftp
{
    disable         = no
    socket_type     = dgram
    protocol        = udp
    wait            = yes
    user            = root
    server          = /usr/sbin/in.tftpd
    server_args     = -s /var/lib/tftpboot
    per_source      = 11
    cps             = 100 2
    flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 UDP) as input.
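For the curious, the TFTP exchange on that port is very simple. Here is a sketch in Python of the read-request (RRQ) datagram a PXE client sends, following the packet format in RFC 1350 (this is an illustration of the protocol, not a full client):

```python
import struct

# A TFTP read request (RFC 1350) is: 2-byte opcode (1 = RRQ),
# then the filename, a NUL, the transfer mode, and a final NUL.
def tftp_rrq(filename, mode="octet"):
    """Build the raw bytes of a TFTP RRQ packet."""
    return struct.pack("!H", 1) + filename.encode() + b"\x00" + mode.encode() + b"\x00"

pkt = tftp_rrq("pxelinux.0")
# The PXE client sends this datagram to UDP port 69 on the TFTP server.
```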

Add Syslinux

Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.

PXE Menu Magic

You can configure pxelinux with or without menus, and many

administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS, with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
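The search order above can be sketched in a few lines of Python (a simplified illustration of pxelinux's behavior, not its actual code):

```python
# Generate the filenames pxelinux tries, in order: the MAC-based name
# first, then the hex IP with one trailing digit dropped per attempt,
# then "default".
def pxelinux_search_order(mac, ip):
    names = ["01-" + mac.lower().replace(":", "-")]
    hex_ip = "%02X%02X%02X%02X" % tuple(int(o) for o in ip.split("."))
    names += [hex_ip[:n] for n in range(len(hex_ip), 0, -1)]
    names.append("default")
    return names

print(pxelinux_search_order("88:99:AA:BB:CC:DD", "10.0.0.200")[:3])
# ['01-88-99-aa-bb-cc-dd', '0A0000C8', '0A0000C']
```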

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux

label linux
    kernel vmlinuz-centos-3.6
    append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?

The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what



happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools, like Knoppix and memtest86+, available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus

Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

label 2
    kernel vmlinuz-centos-4.5-64
    append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the

timeout is 30 seconds, after which it will boot using the options set under label 1. The display option lists a message, if there are any to display, by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------
<F1> Main | <F2> Help
Default image will boot in 30 seconds

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk

One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk, I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the one I used above in my example:

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx511 and miniroot-knx511.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove the 2

28 | www.linuxjournal.com

SYSTEM ADMINISTRATION

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them.

from the append line.

The final step might be the largest for you if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.
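Although NFS configuration is out of scope here, the share in this example could be exported with a single /etc/exports entry along these lines (the export path matches the example; the client subnet and export options are assumptions to adapt for your network):

```
# /etc/exports (sketch): export the copied Knoppix CD contents
# read-only to the PXE clients' subnet
/mnt/knoppix/5.1.1   10.0.0.0/24(ro,no_root_squash)
```

After editing /etc/exports, reload the export table with exportfs -ra.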

Extra Features: Memtest86+

Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
        kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion

There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.


New: LinuxJournal.com Mobile

We are all very excited to let you know that LinuxJournal.com is optimized for mobile viewing. You can enjoy all of our news, blogs and articles from anywhere you can find a data connection on your phone or mobile device. We know you find it difficult to be separated from your Linux Journal, so now you can take LinuxJournal.com everywhere. Need to read that latest shell script trick right now? You got it. Go to m.linuxjournal.com to enjoy this new experience, and be sure to let us know how it works for you. (Katherine Druckman)

Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp://ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan

IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825-1829). Later, in 1998, these RFCs were deprecated by RFCs 2401-2412. The IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

Creating VPNs with IPsec and SSL/TLS

How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN

Figure 1. A Basic VPN Tunnel

There is a problem when the endpoints of the tunnel (which are sometimes called peers) are behind a NAT (Network Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. SWAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a set-mark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.
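As a sketch of that marking approach (the mark value and rule placement are assumptions for illustration, not taken from the article):

```shell
# Mark ESP (IP protocol 50) packets as they arrive; the mark survives
# decryption, so later rules can recognize traffic that entered
# through the IPsec tunnel even though no ipsecX interface exists.
iptables -t mangle -A PREROUTING -p esp -j MARK --set-mark 1

# After decryption, match on the mark instead of an interface name.
iptables -A INPUT -m mark --mark 1 -j ACCEPT
```

The key point is that filtering decisions are tied to the mark rather than to an interface, which is what the ipsecX interfaces provide under KLIPS.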

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN

OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan

First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
    # Simply use raw RSA keys.
    # After starting Openswan, run:
    #     ipsec showhostkey --left (or --right)
    # and fill in the connection similarly
    # to the example below.
    left=192.168.0.89
    leftrsasigkey=0sAQPP...
    # The remote user:
    right=192.168.0.92
    rightrsasigkey=0sAQON...
    type=tunnel
    auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsakey

Then, copy and paste the contents of rsakey into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of:

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. The output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                              [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                         [OK]
NETKEY detected, testing for disabled ICMP send_redirects    [OK]
NETKEY detected, testing for disabled ICMP accept_redirects  [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets)  [OK]
Checking that pluto is running                               [OK]
Checking for 'ip' command                                    [OK]
Checking for 'iptables' command                              [OK]
Opportunistic Encryption Support                             [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets; this is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.


Example: Setting Up a VPN Tunnel with OpenVPN

First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side in a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.
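For example, the key could be generated on the server and copied to the client over SSH like this (the client hostname and destination path here are assumptions for illustration):

```shell
# On the server: generate the shared static key
openvpn --genkey --secret static.key

# Copy it to the client over an encrypted channel
scp static.key root@client.example.com:/etc/openvpn/static.key
```

Keep the key file readable only by root on both machines, since anyone holding it can join or decrypt the tunnel.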

This type of shared key is the simplest; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file, named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar  3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official
port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000
as the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).
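For instance, a host could restrict what is allowed in over the tunnel with ordinary iptables rules on tun0 (allowing only SSH here is an assumption chosen for illustration):

```shell
# Permit only SSH in through the VPN interface, and drop the rest
iptables -A INPUT -i tun0 -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -i tun0 -j DROP
```

Because tun0 looks like any other interface, no VPN-specific matching is needed.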

IPsec and OpenVPN: a Short Comparison

IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash MessageAuthentication Code) to authenticate packets

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in the kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement

Thanks to Mr. Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era

Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

- Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

- Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

- Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

- Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C# and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security

Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.
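A minimal sketch of this pattern in MySQL 5 syntax (the table, procedure and account names here are illustrative assumptions): the procedure runs with its definer's privileges, so the application account needs only EXECUTE, and holds no direct privileges on the table.

```sql
-- The definer's privileges apply inside the procedure body.
CREATE PROCEDURE add_audit_entry(IN emp_id INT, IN note VARCHAR(255))
    SQL SECURITY DEFINER
BEGIN
    -- Every write goes through this one code path, so it is
    -- always logged with a timestamp.
    INSERT INTO audit_log (employee_id, note, logged_at)
    VALUES (emp_id, note, NOW());
END;

-- The application account may call the procedure, but has no
-- INSERT (or any other) privilege on audit_log itself.
GRANT EXECUTE ON PROCEDURE app.add_audit_entry TO 'webuser'@'%';
```

In the mysql command-line client, you would wrap the CREATE PROCEDURE statement in DELIMITER commands so the semicolons inside the body are not treated as statement terminators.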

MySQL 5 Stored Procedures: Relic or Revolution?

Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

Data Abstraction

It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic

Stored programs can improve application performance radically, by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
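A sketch of such a procedure (the accounts table, its columns and the balance check are assumptions for illustration, not the article's actual code) shows how the reads, checks, updates and commit collapse into one server-side call:

```sql
CREATE PROCEDURE transfer_funds(
    IN from_acct INT, IN to_acct INT, IN amount DECIMAL(10,2))
BEGIN
    DECLARE bal DECIMAL(10,2);
    START TRANSACTION;
    -- Lock the source row and read its balance
    SELECT balance INTO bal FROM accounts
     WHERE account_id = from_acct FOR UPDATE;
    IF bal >= amount THEN
        UPDATE accounts SET balance = balance - amount
         WHERE account_id = from_acct;
        UPDATE accounts SET balance = balance + amount
         WHERE account_id = to_acct;
        COMMIT;
    ELSE
        ROLLBACK;
    END IF;
END;
```

The application then issues a single CALL transfer_funds(...) statement, one network round trip, instead of the half-dozen queries described above.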

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications

Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed

It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation

Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping
It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?
Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes

you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data manipulation language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes, and will take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
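For instance, a calculation wrapped in a stored function can then appear in ordinary SELECT statements; the function name, tax rate and column names below are assumptions for illustration, not anything from the article:

```shell
# Hypothetical stored function (rate and names invented), written to a local
# file to show the pattern: encapsulate the calculation, then call it from SQL.
cat > sales_tax.sql <<'EOF'
CREATE FUNCTION sales_tax(amount DECIMAL(8,2))
  RETURNS DECIMAL(8,2) DETERMINISTIC
  RETURN amount * 0.06;

-- Then usable anywhere an ordinary SQL expression is allowed:
-- SELECT item, price, sales_tax(price) FROM orders;
EOF
```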

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006, with Steven Feuerstein). Guy can be contacted at guyharrison@quest.com.



In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare, on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses, from looking at your environment, as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.
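One way to generate a suitably long key is to hash some random data. This is only a sketch; the output filename here is a local stand-in for /etc/ha.d/authkeys:

```shell
# Generate a long random sha1 key and write a two-line authkeys file.
# Writing to ./authkeys.example; on a real node this would be /etc/ha.d/authkeys.
KEY=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | awk '{print $1}')
printf 'auth 1\n1 sha1 %s\n' "$KEY" > authkeys.example
chmod 600 authkeys.example        # must be root-owned and mode 600
```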

The ha.cf File
The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive, the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW


Listing 1. The /etc/ha.d/authkeys File

auth 1

1 sha1 ThisIsAVeryLongAndBoringPassword


increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the cluster declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.
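The four timers relate to one another. As a sketch, the values from Listing 2 satisfy the ordering just described; the specific comparisons below are my rules of thumb, not hard Heartbeat requirements:

```shell
# Timer values from Listing 2, checked against a hedged sanity rule:
# warn before declaring dead, and give extra slack at boot (initdead).
keepalive=2 warntime=16 deadtime=32 initdead=64
ok=yes
[ "$warntime" -gt "$keepalive" ] || ok=no   # warn only after several missed beats
[ "$deadtime" -gt "$warntime" ]  || ok=no   # declare dead only after warning
[ "$initdead" -ge "$deadtime" ]  || ok=no   # allow interfaces time to come up
echo "$ok"
```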

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.
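The filtering step can be scripted. The sample setserial output below is invented to show the shape of the check:

```shell
# Keep only lines describing a real UART; on a live system you would pipe
# `setserial -g /dev/ttyS*` into this filter instead of the sample text.
filter_real_ports() { grep -v 'UART: unknown'; }
real=$(printf '%s\n' \
  '/dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 4' \
  '/dev/ttyS1, UART: unknown, Port: 0x02f8, IRQ: 3' \
  | filter_real_ports)
echo "$real"
```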

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs the serial device is /dev/ttyS1

# On stratton the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos heartbeat ip address is 192.168.1.11

# phobos heartbeat ip address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File
The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.
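The start-left-to-right, stop-right-to-left ordering can be sketched in a few lines of shell; the resource names here are examples only:

```shell
# haresources ordering: resources start left to right, stop right to left.
resources='IPaddr apache2 nfs-kernel-server'
start_order=$(for r in $resources; do echo "$r"; done)   # top to bottom
stop_order=$(echo "$start_order" | tac)                  # reversed for shutdown
echo "$stop_order"
```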

Resources are scripts located in either /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPAddr script sets the cluster IP address, which you specify like so:

IPAddr::192.168.1.9/24/eth0

In the above example, the IPAddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).
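The :: convention is just a positional argument separator. Splitting a spec shows the pieces; this parse is a quick sketch for inspection, not how Heartbeat itself processes the line:

```shell
# Split a haresources entry into the resource script and its arguments.
spec='Filesystem::/dev/etherd/e1.0::/opt/data::xfs'
script=${spec%%::*}                                   # text before the first ::
args=$(printf '%s\n' "${spec#*::}" | sed 's/::/ /g')  # remaining fields, space-separated
echo "$script: $args"
```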




Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
        IPaddr::192.168.1.21 \
        Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
        killnfsd \
        nfs-common \
        nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting
Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.
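Grepping the messages file for heartbeat entries cuts the noise. The log lines below are invented samples of the general syslog format, not real Heartbeat output:

```shell
# On a live node: tail -f /var/log/messages | grep heartbeat
# Here, count heartbeat lines in a small invented sample.
n=$(grep -c 'heartbeat' <<'EOF'
May 10 12:00:01 deimos heartbeat: info: Link phobos:eth1 up.
May 10 12:00:05 deimos kernel: usb 1-1: new device found
May 10 12:00:09 deimos heartbeat: WARN: node phobos: is dead
EOF
)
echo "$n"
```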

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion
I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include

active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down, as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation
To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of disk space. I recommend at least a 1GHz processor with 512MB or more of memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow: LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING


numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script, or directly edit /etc/rc.d/rc.local, to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management
Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for


Figure 1. Install Options
Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying

permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication
Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two), configured the same way (Fedora and FDS), with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts, or use DNS. I recommend having DNS and NTP in place already to make life easier.

The next step is creating a Replication Manager account to bind one server's directory to the other, and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On




Figure 5. User Screen
Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs, and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication, which sends only deltas, rather than the entire directory database, when changes occur; this is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, and copy the resulting ldif folder to server two and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on the second server, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients
With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but due to space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain POSIX information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility, available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you; you do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tab. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
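For reference, the relevant lines the utility writes look roughly like the following (a representative sketch; the server address and base DN are placeholders for your environment):

```
# /etc/nsswitch.conf -- consult local files first, then LDAP
passwd: files ldap
shadow: files ldap
group:  files ldap

# /etc/ldap.conf -- point NSS/PAM at the FDS server
host 192.168.1.100
base dc=example,dc=com
```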

www.linuxjournal.com | 45

Figure 7. Enter a Name and Password

Figure 8. Enter Information for the RManager Account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's POSIX information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll

to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some .dll issues with

version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems; without this, the only way to bypass the pGina client is through Safe Mode. Now click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions
FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.


SYSTEM ADMINISTRATION

Setting up replication in FDS is a relatively painless process.

Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


# is this a linux machine or hpux?
linux:: resolve_conf = ( $(copydir)/linux/resolv.conf )
hpux::  resolve_conf = ( $(copydir)/hpux/resolv.conf )

copy:

    linux::

        $(resolve_conf)

Here is a full cfagent.conf file that makes use of everything I've covered thus far. It also adds some practical examples of how to do sysadmin work with cfengine:

# cfagent.conf

control:

    actionsequence = ( files editfiles processes )
    AddInstallable = ( cron_restart )

    solaris:: crontab = ( /var/spool/cron/crontabs/root )
    linux::   crontab = ( /var/spool/cron/root )

files:

    solaris::
        $(crontab)
            action=touch

    linux::
        $(crontab)
            action=touch

editfiles:

    solaris::
        { $(crontab)
          AppendIfNoSuchLine "0,10,20,30,40,50 * * * * /var/cfengine/sbin/cfexecd -F"
          DefineClasses "cron_restart"
        }

    linux::
        { $(crontab)
          AppendIfNoSuchLine "0,10,20,30,40,50 * * * * /var/cfengine/sbin/cfexecd -F"
        }
        # linux doesn't need a cron restart

shellcommands:

    solaris.cron_restart::
        "/etc/init.d/cron stop"
        "/etc/init.d/cron start"

import:

    any::
        groups.cf
        copy.cf

The above is a full cfagent configuration that adds cfengine execution from cron to each client (if it's Linux or Solaris). So, effectively, once you run cfengine manually for the first time with this cfagent.conf file, cfengine will continue to run every ten minutes from that host, but you won't need to edit or restart cron. The control section of the cfagent.conf is where you can define some variables that will control how

cfengine handles the configuration file. actionsequence tells cfengine the order in which to execute each command, and AddInstallable is a variable that holds soft groups that get defined later in the file in a "define" statement, such as after the editfiles command, where the line is DefineClasses "cron_restart". The reason for using AddInstallable is that sometimes cfengine skips over groups that are defined after command execution, and defining that group in the control section ensures that the command will be recognized throughout the configuration.
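To see why AppendIfNoSuchLine is safe to run repeatedly, here is a rough shell equivalent of what cfengine does for the crontab entry (using a temporary file in place of the real crontab):

```shell
# Rough shell equivalent of cfengine's AppendIfNoSuchLine:
# append the line only if it is not already present (idempotent).
CRONTAB=/tmp/demo-crontab
LINE='0,10,20,30,40,50 * * * * /var/cfengine/sbin/cfexecd -F'

touch "$CRONTAB"
grep -qxF "$LINE" "$CRONTAB" || echo "$LINE" >> "$CRONTAB"
grep -qxF "$LINE" "$CRONTAB" || echo "$LINE" >> "$CRONTAB"   # second run is a no-op

grep -cF 'cfexecd' "$CRONTAB"   # the entry appears exactly once
```

Run it twice, ten times, or from cron itself; the crontab gains the line once and is never duplicated, which is what makes the cfengine configuration safe to apply on every run.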

Being able to check configuration files out from a versioning system and distribute them to a set of servers is a powerful system administration tool. A number of independent tools will do a subset of cfengine's work (such as rsync, ssh and make), but nothing else allows a small group of system administrators to manage such a large group of servers. Centralizing configuration management has the dual benefit of information and control, and cfengine provides these benefits in a free, open-source tool for your infrastructure and application environments.

Scott Lackey is an independent technology consultant who has developed and deployed configuration management solutions across industry, from NASA to Wall Street. Contact him at slackey@violetconsulting.net or www.violetconsulting.net.


There are many different ways to control a Linux system over a network, and many reasons you might want to do so. When covering remote control in past columns, I've tended to focus on server-oriented usage scenarios and tools, which is to say, on administering servers via text-based applications, such as OpenSSH. But what about GUI-based applications and remote desktops?

Remote desktop sessions can be very useful for technical support, surveillance and remote control of desktop applications. But it isn't always necessary to access an entire desktop; you may need to run only one or two specific graphical applications.

In this month's column, I describe how to use VNC (Virtual Network Computing) for remote desktop sessions, and OpenSSH with X forwarding for remote access to specific applications. Our focus here, of course, is on using these tools securely, and I include a healthy dose of opinion as to the relative merits of each.

Remote Desktops vs. Remote Applications
So, which approach should you use, remote desktops or remote applications? If you've come to Linux from the Microsoft world, you may be tempted to assume that because Terminal Services in Windows is so useful, you have to have some sort of remote desktop access in Linux, too. But that may not be the case.

Linux and most other UNIX-based operating systems use the X Window System as the basis for their various graphical environments. And, the X Window System was designed to be run over networks. In fact, it treats your local system as a self-contained network, over which different parts of the X Window System communicate.

Accordingly, it's not only possible but easy to run individual X Window System applications over TCP/IP networks; that is, to display the output (window) of a remotely executed graphical application on your local system. Because the X Window System's use of networks isn't terribly secure (the X Window System has no native support whatsoever for any kind of encryption), nowadays we usually tunnel X Window System application windows over the Secure Shell (SSH), especially OpenSSH.

The advantage of tunneling individual application windows is that it's faster and generally more secure than running the entire desktop remotely. The disadvantages are that OpenSSH has a history of security vulnerabilities, and that for many Linux newcomers, forwarding graphical applications via commands entered in a shell session is counterintuitive. And besides, as I mentioned earlier, remote desktop control (or even just viewing) can be very useful for technical support and for security surveillance.

Using OpenSSH with X Window System Forwarding
Having said all that, tunneling X Window System applications over OpenSSH may be a lot easier than you imagine. All you need is a client system running an X server (for example, a Linux desktop system or a Windows system running the Cygwin X server) and a destination system running the OpenSSH dæmon (sshd).

Note that I didn't say "a destination system running sshd and an X server". This is because X servers, oddly enough, don't actually run or control X Window System applications; they merely display their output. Therefore, if you're running an X Window System application on a remote system, you need to run an X server on your local system, not on the remote system. The application will execute on the remote system and send its output to your local X server's display.

Suppose you've got two systems, mylaptop and remotebox, and you want to monitor system resources on remotebox with the GNOME System Monitor. Suppose further that you're running the X Window System on mylaptop and sshd on remotebox.

First, from a terminal window or xterm on mylaptop, you'd open an SSH session like this:

mick@mylaptop:~$ ssh -X admin-slave@remotebox

admin-slave@remotebox's password:

Last login: Wed Jun 11 21:50:19 2008 from dtcla00b674986d

admin-slave@remotebox:~$

Note the -X flag in my ssh command. This enables X Window System forwarding for the SSH session. In order for that to work, sshd on the remote system must be configured with X11Forwarding set to yes in its /etc/ssh/sshd_config file. On many distributions, yes is the default setting, but check yours to be sure.

Next, to run the GNOME System Monitor on remotebox, such that its output (window) is displayed on mylaptop, simply execute it from within the same SSH session:

admin-slave@remotebox:~$ gnome-system-monitor &

The trailing ampersand (&) causes this command to run in

Secured Remote Desktop/Application Sessions
Run graphical applications from afar, securely. MICK BAUER


the background, so you can initiate other commands from the same shell session. Without this, the cursor won't reappear in your shell window until you kill the command you just started.

At this point, the GNOME System Monitor window should appear on mylaptop's screen, displaying system performance information for remotebox. And that, really, is all there is to it.

This technique works for practically any X Window System application installed on the remote system. The only catch is that you need to know the name of anything you want to run in this way; that is, the actual name of the executable file.

If you're accustomed to starting your X Window System applications from a menu on your desktop, you may not know the names of their corresponding executables. One quick way to find out is to open your desktop manager's menu editor, and then view the properties screen for the application in question.

For example, on a GNOME desktop, you would right-click on the Applications menu button, select Edit Menus, scroll down to System/Administration, right-click on System Monitor and select Properties. This pops up a window whose Command field shows the name gnome-system-monitor.

Figure 1 shows the Launcher Properties, not for the GNOME System Monitor, but instead for the GNOME File Browser, which is a better example, because its launcher entry includes some command-line options. Obviously, all such options also can be used when starting X applications over SSH.

Figure 1. Launcher Properties for the GNOME File Browser (Nautilus)

If this sounds like too much trouble, or if you're worried about accidentally messing up your desktop menus, you simply can run the application in question, issue the command ps auxw in a terminal window, and find the entry that corresponds to your application. The last field in each row of the output from ps is the executable's name, plus the command-line options with which it was invoked.

Once you've finished running your remote X Window System application, you can close it the usual way (selecting Exit from its File menu, clicking the x button in the upper right-hand corner of its window and so forth). Don't forget to close your SSH session too, by issuing the command exit in the terminal window where you're running SSH.

Virtual Network Computing (VNC)
Now that I've shown you the preferred way to run remote X Window System applications, let's discuss how to control an entire remote desktop. In the Linux/UNIX world, the most popular tool for this is Virtual Network Computing, or VNC.

Originally a project of the Olivetti Research Laboratory (which was subsequently acquired by Oracle and then AT&T before being shut down), VNC uses a protocol called Remote Frame Buffer (RFB). The original creators of VNC now maintain the application suite RealVNC, which is available in free and commercial versions, but TightVNC, UltraVNC and GNOME's vino VNC server and vinagre VNC client also are popular.

VNC's strengths are its simplicity, ubiquity and portability; it runs on many different operating systems. Because it runs over a single TCP port (usually TCP 5900), it's also firewall-friendly and easy to tunnel.

Its security, however, is somewhat problematic. VNC authentication uses a DES-based transaction that, if eavesdropped on, is vulnerable to optimized brute-force (password-guessing) attacks. This vulnerability is exacerbated by the fact that many versions of VNC support only eight-character passwords.

Furthermore, VNC session data usually is transmitted unencrypted. Only a couple flavors of VNC support TLS encryption of RFB streams, and it isn't clear how securely TLS has been implemented even in those. Thus, an attacker using a trivially hacked VNC client may be able to reconstruct and view eavesdropped VNC streams.


But Don't Real Sysadmins Stick to Terminal Sessions?
If you've read my book or my past columns, you've endured my repeated exhortations to keep the X Window System off of Internet-facing servers, or any other system on which it isn't needed, due to X's complexity and history of security vulnerabilities. So, why am I devoting an entire column to graphical remote system administration?

Don't worry, I haven't gone soft-hearted (though possibly slightly soft-headed). I stand by that advice. But there are plenty of contexts in which you may need to administer or view things remotely, besides hardened servers in Internet-facing DMZ networks.

And not all people who need to run remote applications in those non-Internet-DMZ scenarios are advanced Linux users. Should they be forbidden from doing what they need to do until they've mastered using the vi editor and writing bash scripts? Especially given that it is possible to mitigate some of the risks associated with the X Window System and VNC?

Of course they shouldn't, although I do encourage all Linux newcomers to embrace the command line. The day may come when Linux is a truly graphically oriented operating system, like Mac OS, but for now, pretty much the entire OS is driven by configuration files in /etc (and in users' home directories), and that's unlikely to change soon.

Luckily, as it operates over a single TCP port, VNC is easy to tunnel through SSH, through Virtual Private Network (VPN) sessions and through TLS/SSL wrappers, such as stunnel. Let's look at a simple VNC-over-SSH example.

Tunneling VNC over SSH
To tunnel VNC over SSH, your remote system must be running an SSH dæmon (sshd) and a VNC server application. Your local system must have an SSH client (ssh) and a VNC client application.

Our example remote system, remotebox, already is running sshd. Suppose it also has vino, which is also known as the GNOME Remote Desktop Preferences applet (on Ubuntu systems, it's located in the System menu's Preferences section). First, presumably from remotebox's local console, you need to open that applet and enable remote desktop sharing. Figure 2 shows vino's General settings.

If you want to view only this system's remote desktop, without controlling it, uncheck Allow other users to control your desktop. If you don't want to have to confirm remote connections explicitly (for example, because you want to connect to this system when it's unattended), you can uncheck the Ask you for confirmation box. Any time you leave yourself logged in to an unattended system, be sure to use a password-protected screensaver.

vino is limited in this way. Because vino is loaded only after you log in, you can use it only to connect to a fully logged-in GNOME session in progress, not, for example, to a gdm (GNOME login) prompt. Unlike vino, other versions of VNC can be invoked as needed from xinetd or inetd. That technique is out of the scope of this article, but see Resources for a link to a how-to for doing so in Ubuntu, or simply Google the string "vnc xinetd".
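For the curious, an xinetd-launched VNC service typically looks something like the following sketch (the server path, display geometry and arguments here are assumptions; consult the how-to in Resources for a tested recipe for your distribution):

```
# /etc/xinetd.d/Xvnc -- sketch only; adjust server path and args to taste
service Xvnc
{
        type        = UNLISTED
        disable     = no
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = nobody
        port        = 5900
        server      = /usr/bin/Xvnc
        server_args = -inetd -once -query localhost -geometry 1024x768 -depth 16
}
```

Because xinetd spawns a fresh X/VNC session per connection, this arrangement can present a gdm login prompt to remote users, which is exactly what vino cannot do.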

Continuing with our vino example, don't close that Remote Desktop Preferences applet yet. Be sure to check the Require the user to enter this password box, and to select a difficult-to-guess password. Remember, vino runs in an

already-logged-in desktop session, so unless you set a password here, you'll run the risk of allowing completely unauthenticated sessions (depending on whether a password-protected screensaver is running).

If your remote system will be run logged in but unattended, you probably will want to uncheck Ask you for confirmation. Again, don't forget that locked screensaver.

We're not done setting up vino on remotebox yet. Figure 3 shows the Advanced Settings tab.

Several advanced settings are particularly noteworthy. First, you should check Only allow local connections, after which remote connections still will be possible, but only via a port-forwarder like SSH or stunnel. Second, you may want to set a custom TCP port for vino to listen on, via the Use an alternative port option. In this example, I've chosen 20226. This is an instance of useful security-through-obscurity; if our other


Figure 2. General Settings in GNOME Remote Desktop Preferences (vino)

Figure 3. Advanced Settings in GNOME Remote Desktop Preferences (vino)

(more meaningful) security controls fail, at least we won't be running VNC on the obvious default port.

Also, you should check the box next to Lock screen on disconnect, but you probably should not check Require encryption, as SSH will provide our session encryption, and adding redundant (and not-necessarily-known-good) encryption will only increase vino's already-significant latency. If you think there's only a moderate level of risk of eavesdropping in the environment in which you want to use vino (for example, on a small, private, local-area network inaccessible from the Internet), vino's TLS implementation may be good enough for you. In that case, you may opt to check the Require encryption box and skip the SSH tunneling.

(If any of you know more about TLS in vino than I was able to divine from the Internet, please write in. During my research for this article, I found no documentation or on-line discussions of vino's TLS design details whatsoever, beyond people commenting that the soundness of TLS in vino is unknown.)

This month, I seem to be offering you more "opt-out" opportunities in my examples than usual. Perhaps I'm becoming less dogmatic with age. Regardless, let's assume you've followed my advice to forego vino's encryption in favor of SSH.

At this point, you're done with the server-side settings. You won't have to change those again. If you restart your GNOME session on remotebox, vino will restart as well, with the options you just set. The following steps, however, are necessary each time you want to initiate a VNC/SSH session.

On mylaptop (the system from which you want to connect to remotebox), open a terminal window, and type this command:

mick@mylaptop:~$ ssh -L 20226:remotebox:20226 admin-slave@remotebox

OpenSSH's -L option sets up a local port-forwarder. In this example, connections to mylaptop's TCP port 20226 will be forwarded to the same port on remotebox. The syntax for this option is "-L [local port]:[remote IP or hostname]:[remote port]". You can use any available local TCP port you like (higher than 1024, unless you're running SSH as root), but the remote port must correspond to the alternative port you set vino to listen on (20226 in our example), or, if you didn't set an alternative port, it should be VNC's default of 5900.
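As a concrete variation, if you had left your VNC server on the default port, the tunnel and a command-line client invocation might look like this (the hostnames and account are from the running example; vncviewer stands in for whatever VNC client you happen to use):

```shell
# Forward local port 5901 to remotebox's default VNC port, 5900
ssh -L 5901:remotebox:5900 admin-slave@remotebox

# Then, from another terminal, point the VNC client at the local end:
vncviewer localhost:5901
```

The local and remote port numbers do not have to match; only the remote one must agree with what the VNC server is actually listening on.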

That's it for the SSH session. You'll be prompted for a password and then given a bash prompt on remotebox. But you won't need it, except to enter exit when your VNC session is finished. It's time to fire up your VNC client.

vino's companion client, vinagre (also known as the GNOME Remote Desktop Viewer), is good enough for our purposes here. On Ubuntu systems, it's located in the Applications menu, in the Internet section. In Figure 4, I've opened the Remote Desktop Viewer and clicked the Connect button. As you can see, rather than remotebox, I've entered localhost as the hostname. I've also entered port number 20226.

When I click the Connect button, vinagre connects to mylaptop's local TCP port 20226, which actually is being listened on by my local SSH process. SSH forwards this connection attempt through its encrypted connection to TCP 20226 on remotebox, which is being listened on by remotebox's vino process.

At this point, I'm prompted for remotebox's vino password (shown in Figure 2). On successful authentication, I'll have full access to my active desktop session on remotebox.

To end the session, I close the Remote Desktop Viewer's session, and then enter exit in my SSH session to remotebox; that's all there is to it.

This technique is easy to adapt to other versions of VNC servers and clients, and probably also to other versions of SSH; there are commercial alternatives to OpenSSH, including Windows versions. I mentioned that VNC client applications are available for a wide variety of platforms; on any such platform, you can use this technique, so long as you also have an SSH client that supports port forwarding.

Conclusion
Thus ends my crash course on how to control graphical applications over networks. I hope my examples of both techniques, SSH with X forwarding and VNC over SSH, help you get started with whatever particular distribution and software packages you prefer. Be safe!

Mick Bauer (darth.elmo@wiremonkeys.org) is Network Security Architect for one of the US's largest banks. He is the author of the O'Reilly book Linux Server Security, 2nd edition (formerly called Building Secure Servers With Linux), an occasional presenter at information security conferences, and composer of the "Network Engineering Polka".


Resources

The Cygwin/X (information about Cygwin's free X server for Microsoft Windows): x.cygwin.com

Tichondrius' HOWTO for setting up VNC with resumable sessions (Ubuntu-centric, but mostly applicable to other distributions): ubuntuforums.org/showthread.php?t=122402

Wikipedia's VNC article, which may be helpful in making sense of the different flavors of VNC: en.wikipedia.org/wiki/Vnc

Figure 4. Using vinagre to Connect to an SSH-Forwarded Local Port


Does anyone remember Linuxcare? Founded in 1998, Linuxcare was a company that provided support services for Linux users in corporate environments. I remember seeing Linuxcare at the first-ever LinuxWorld conference in San Jose, and the thing I took away from that LinuxWorld was the Linuxcare Bootable Business Card (BBC). The BBC was a 50MB cut-down Linux distribution that fit on a business-card-size compact disc. I used that distribution to recover and repair quite a few machines, until the advent of Knoppix. I always loved the portability of that little CD though, and I missed it greatly until I stumbled across Damn Small Linux (DSL) one day.

After reading through the DSL Web site, I discovered that it was possible to run DSL off of a bootable USB key, and that old love for the Bootable Business Card was rekindled in a new way. It wasn't until I had a conversation with fellow sysadmin Kyle Rankin about the PXE boot environment he'd implemented that I realized it might be possible to set up a USB key to do more than merely boot a recovery environment. Before long, I had added the CentOS and Ubuntu netinstalls to my little USB key. Not long after that, I was mentioning this

in my favorite IRC channel, and one of the fellows in there suggested I put the code on SourceForge and call it Billix. I'd had a couple beers by then and thought it sounded like a great idea. In that instant, Billix was born.

Billix is an aggregation of many different tools that can be useful to system administrators, all compressed down to fit within a 256MB bootable USB thumbdrive. The 256MB size is not an arbitrary number; rather, it was chosen because USB thumbdrives are very inexpensive at that size (many companies now give them away as advertising gimmicks). This allows me to have many Billix keys lying around, just waiting to be used. Because the keys are cheap or free, I don't feel bad about

leaving one in a server for a day or two. If your USB drive is larger than 256MB, you still can use it for its designed purpose: storing files. Billix doesn't hamper normal use of the USB drive in any way. There also is an ISO distribution of Billix, if you want to burn a CD of it, but I feel it's not nearly as convenient as having it on a USB key.

The current Billix distribution (0.21 at the time of this writing) includes the following tools:

Damn Small Linux 4.2.5

Ubuntu 8.04 LTS netinstall

Ubuntu 7.10 netinstall

Ubuntu 6.06 LTS netinstall

Fedora 8 netinstall

CentOS 5.1 netinstall

CentOS 4.6 netinstall

Debian Etch netinstall

Debian Sarge netinstall

Memtest86 memory-checking utility

Ntpwd Windows password changing utility

DBAN disk wiper utility

So, with one USB key, a system administrator can recover or repair a machine, install one of eight different Linux distributions, test the memory in a system, get into a Windows machine with a lost password, or wipe the disks of a machine before repurpose or disposal. In order to install any of the netinstall-based Linux distributions, a working Internet connection with DHCP is required, as the netinstall downloads the installation bits for each distribution on the fly from Internet-based mirrors.

Hopefully, you're excited to check out Billix. You simply can download the ISO version and burn it to a CD to get started, but the full utility of Billix really shines when you install it on a USB disk. Before you install it on a USB disk, you need to meet the following prerequisites:

Billix: a Sysadmin's Swiss Army Knife
Turn that spare USB stick into a sysadmin's dream with Billix. BILL CHILDERS



256MB or greater USB drive with FAT- or FAT32-based filesystem.

Internet connection with DHCP (for netinstalls only; not required for DSL, Windows password removal or disk wiping with DBAN).

install-mbr (part of the mbr package on Ubuntu or Debian, needed for some USB drives).

syslinux (from the syslinux package on Ubuntu or Debian, required to create the bootsector on the USB drive).

Your system must be capable of booting from USB devices (most have this ability if they're made after 2005).

To install the USB-based version of Billix, first check your drive. If that drive has the U3 Windows software on it, you may want to remove it to unlock all of the drive's capacity (see the Resources section for U3 removal utilities, which are typically Windows-based). Next, if your USB drive has data on it, back up the data. I cannot stress this enough. You will be making adjustments to the partition table of the USB drive, so backing up any data that already is on the key is critical. Download the latest version of Billix from the SourceForge.net project page to your computer. Once the download is complete, untar the contents of the tarball to the root directory of your USB drive.

Now that the contents of the tarball are on your USB drive, you need to install a Master Boot Record (MBR) on the drive and set a bootsector on the drive. The Master Boot Record needs to be set up on the USB drive first. Issue an install-mbr -p1 <device> command (where <device> is your USB drive, such as /dev/sdb). Warning: make sure that you get the device of the USB drive correct, or you run the risk of messing up the MBR on your system's boot device. The -p1 option tells install-mbr to set the first partition as active (that's the one that will contain the bootsector).

Next, the bootsector needs to be installed within the first partition. Run syslinux -s <device/partition> (where <device/partition> is the device and partition of the USB drive, such as /dev/sdb1). Warning: much like installing the MBR, installing the bootsector can be a dangerous operation if you run it on the wrong device, so take care and double-check your command line before pressing the Enter key.
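Putting the whole procedure together, the installation might look like the following (the device name /dev/sdb, the mountpoint /media/usb and the tarball name are assumptions for illustration; verify the real device name with dmesg or fdisk -l before running anything):

```shell
# DANGER: double-check that /dev/sdb really is the USB key, not a system disk!
tar -xvf billix-0.21.tar.gz -C /media/usb   # unpack Billix to the key's root
umount /media/usb                           # unmount before writing boot code
install-mbr -p1 /dev/sdb                    # write an MBR; mark partition 1 active
syslinux -s /dev/sdb1                       # install the bootsector on partition 1
```

The -s flag asks syslinux for a "safe, slow and stupid" bootsector, which boots on a wider range of flaky BIOSes at the cost of speed.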

Figure 1. The Billix Boot Menu

At this point, your USB drive can be unmounted safely, and you can test it out by booting from the USB drive. Once your system successfully boots from the USB drive, you should see a menu similar to the one shown in Figure 1. Simply choose the number for what you want to boot, run or install, and that distribution will spring into action. If you don't select a number, Damn Small Linux will boot automatically after 30 seconds.

Damn Small Linux is a miniature version of Knoppix (it actually has much of the automatic hardware-detection routines of Knoppix in it). As such, it makes an excellent rescue environment, or it can be used as a quick "trusted desktop" in the event you need to "borrow" a friend's computer to do something. I have used DSL in the past to commandeer a system temporarily at a cybercafé, so I could log in to work and fix a sick server. I've even used DSL to boot and mount a corrupted Windows filesystem, and I was able to save some of the data. DSL is fairly full-featured for its size, and it comes with two window managers (JWM or Fluxbox). It can be configured to save its data back to the USB disk in a persistent fashion, so you always can be sure you have your critical files with you and that it's easily accessible.

All the Linux distribution installations have one thing in common: they are all network-based installs. Although this is a good thing for Billix, as they take up very little space (around 10MB for each distro), it can be a bad thing during installation, as the installation time will vary with the speed of your Internet connection. There is one other upside to a network-based installation. In many cases, there is no need to update


Troubleshooting Billix

A few things can go wrong when converting a USB key to run Billix (or any USB-based distribution). The most common issue is for the USB drive to fail to boot the system. This can be due to several things. Older systems often split USB disk support into USB-Floppy emulation and USB-HDD emulation. For Billix to work on these systems, USB-HDD needs to be enabled. If your drive came with the U3 Windows-based software vault, this typically needs to be disabled or removed prior to installing Billix.

If you're seeing "MBR123" or something similar in the upper-left corner, but the system is hanging, you have a misconfigured MBR. Try install-mbr again, and make sure to use the -p1 switch. You will need to run syslinux again after running install-mbr. If all else fails, you probably need to wipe the USB drive and begin again. Back up the data on the USB drive, then use fdisk to build a new partition table (make sure to set it as FAT or FAT32). Use mkfs.vfat (with the -F 32 switch if it's a FAT32 filesystem) to build a new blank filesystem, untar the tarball again, and run install-mbr and syslinux on the newly defined filesystem.
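The recovery procedure above can be summarized as a command sequence. The sketch below only prints the plan, so nothing touches a real disk; /dev/sdb and the tarball name are placeholders for your own device and download.

```shell
# Recovery plan from the sidebar, printed rather than executed.
# /dev/sdb and billix-latest.tar are placeholders -- substitute your
# own device (check with fdisk -l) and the tarball you downloaded.
DEVICE=/dev/sdb
plan=$(cat <<EOF
fdisk $DEVICE
mkfs.vfat -F 32 ${DEVICE}1
mount ${DEVICE}1 /mnt && tar -C /mnt -xf billix-latest.tar && umount /mnt
install-mbr -p1 $DEVICE
syslinux -s ${DEVICE}1
EOF
)
echo "$plan"
```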

the newly installed operating system after installation, because the OS bits that are downloaded are typically up to date. Note that when using the Red Hat-based installers (CentOS 4.6, CentOS 5.1 and Fedora 8), the system may appear to hang during the download of a file called minstg2.img. The system probably isn't hanging; it's just downloading that file, which is

fairly large (around 40MB), so it can take a while depending on the speed of the mirror and the speed of your connection. Take care not to specify the USB disk accidentally as the install target for the distribution you are attempting to install.

The memtest86 utility has been around for quite a few years, yet it's a key tool for a sysadmin when faced with a flaky computer. It does only one thing, but it does it very well: it tests the RAM of a system very thoroughly. Simply boot off the USB drive, select memtest from the menu, and press Enter, and memtest86 will load and begin testing the RAM of the system immediately. At this point, you can remove the USB drive from the computer. It's no longer needed, as memtest86 is very small and loads completely into memory on startup.

The ntpwd Windows password "cracking" tool can be a controversial tool, but it is included in the Billix distribution because, as a system administrator, I've been asked countless times to get into Windows systems (or accounts on Windows systems) where the password has been lost or forgotten. The ntpwd utility can be a bit daunting, as the UI is text-based and nearly nonexistent, but it does a good job of mounting FAT32- or NTFS-based partitions, editing the SAM account database and saving those changes. Be sure to read all the messages that ntpwd displays, and take care to select the proper disk partition to edit. Also, take the program's advice and nullify a password rather than trying to change it from within the interface; zeroing the password works much more reliably.

DBAN (otherwise known as Darik's Boot and Nuke) is a very good "nuke it from orbit" hard disk wiper. It provides various levels of wipe, from a basic "overwrite the disk with zeros" to a full DoD-certified, multipass wipe. Like memtest86, DBAN is small and loads completely into memory, so you can boot the utility, remove the USB drive, start a wipe and move on to another system. I've used this to wipe clean disks on systems before handing them over to a recycler or before selling a system.

In closing, Billix may not make you coffee in the morning or eradicate Windows from the face of the earth, but having a USB key in your pocket that offers you the functionality to do all of those tasks quickly and easily can make the life of a system administrator (or any Linux-oriented person) much easier.

Bill Childers is an IT Manager in Silicon Valley, where he lives with his wife and two children. He enjoys Linux far too much, and probably should get more sun from time to time. In his spare time, he does work with the Gilroy Garlic Festival, but he does not smell like garlic.



Billix is an aggregation of many different tools that can be useful to system administrators, all compressed down to fit within a 256MB bootable USB thumbdrive.

Expanding Billix

It's relatively easy to expand Billix to support other Linux distributions, such as Knoppix or the Ubuntu live CDs. Copy the contents of the Billix USB tarball to a directory on your hard disk, and download the distro you want. Copy the necessary kernel and initrd to the directory where you put the contents of the USB tarball, taking care to rename any files if there are files in that directory with the same name. Copy any compressed filesystems that your new distro may use to the USB drive (for example, Knoppix has the KNOPPIX directory, and Puppy Linux uses PUP_XXX.SFS). Then, look at the boot configuration for that distro (it should be in isolinux.cfg). Take the necessary lines out of that file, and put them in the Billix syslinux.cfg file, changing filenames as necessary. Optionally, you can add a menu item to the bootmsg file. Finally, run syslinux -s <device>, and reboot your system to test out your newly expanded Billix.
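As an illustration of moving lines from isolinux.cfg into syslinux.cfg, here is the sort of stanza you might append. The label, kernel and initrd filenames below are assumptions based on a typical Knoppix layout, not taken from the Billix tarball; use the names from your distro's own isolinux.cfg.

```shell
# Append a hypothetical Knoppix entry to a copy of syslinux.cfg.
# The filenames (linux24, minirt.gz) are assumptions typical of
# Knoppix -- take the real ones from your distro's isolinux.cfg.
cfg=syslinux.cfg.example
cat >> "$cfg" <<'EOF'
LABEL knoppix
  KERNEL linux24
  APPEND ramdisk_size=100000 init=/etc/init lang=us initrd=minirt.gz
EOF
```

After editing the real syslinux.cfg, remember the article's final step: rerun syslinux -s on the drive and reboot to test.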

I have a 2GB USB drive that has a "Super-Billix" installation that includes Knoppix and Ubuntu 8.04. An added bonus of having the entire Ubuntu live CD in your pocket is that, thanks to the speed of USB 2.0, you can install Ubuntu in less than ten minutes, which would be really useful at an installfest. There is good information on creating Ubuntu-bootable USB drives available at the Pendrive Linux Web site.

Alternatively, a really neat thing to do (but way beyond the scope of this article) is to convert Billix into a network-boot (via Pre-Execution Environment, or PXE) environment. I've actually got a VMware virtual machine running Billix as a PXE boot server.

Resources

Billix Project Page: sourceforge.net/projects/billix

Damn Small Linux: www.damnsmalllinux.org

DBAN Project Page: dban.sourceforge.net

Knoppix: www.knoppix.net

Pendrive Linux: www.pendrivelinux.com

Syslinux: syslinux.zytor.com/index.php

Pxelinux: syslinux.zytor.com/pxe.php

U3 Removal Software: www.u3.com/uninstall


If a tree falls in the woods and no one is there to hear it, does it make a sound? This is the classic query designed to place your mind into the Zen-like state known as the silent mind. Whether or not you want to hear a tree fall, if you run a network, you probably want to hear a server when it goes down. Many organizations utilize the long-established Simple Network Management Protocol (SNMP) as a way to monitor their networks proactively and listen for things going down.

At a rudimentary level, SNMP requires only two items to work: a management server and a managed device (or devices). The management server pulls status and health information at regular intervals from the managed devices and stores the information in a table. Managed devices use local SNMP agents to notify the management server when defined behavior occurs (such as errors or "traps"), which are stored in the same table on the server. The result is an accurate, real-time reporting mechanism for outages. However, SNMP as a protocol does not stipulate how the data in these tables is to be presented and managed for the end user. That's where a promising new open-source network-monitoring software called Zenoss (pronounced Zeen-ohss) comes in.

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment. The Core version of Zenoss used in this article is freely available under the GPLv2. An Enterprise version also is available with additional features and support. In this article, we install Zenoss on a CentOS 5.1 system to observe its usefulness in a network-monitoring role. From there, we create a simulated multisystem server network using the following systems: a Fedora-based Postfix e-mail server, an Ubuntu server running Apache and a Windows server running File and Print services. To conserve space, only the CentOS installation is discussed in detail here. For the managed systems, only SNMP installation and configuration are covered.

Building the Zenoss Server

Begin by selecting your hardware. Zenoss lacks specific hardware requirements, but it relies heavily on MySQL, so you can use MySQL requirements as a rough guideline. I recommend using the fastest processor available, 1GB of memory, fast enough hard disks to provide acceptable MySQL performance and Gigabit Ethernet for the network. I ran several test configurations, and this configuration seemed adequate enough for a medium-size network (100+ nodes/devices). To keep configuration simple, all firewalls and SELinux instances were disabled in the test environment. If you use firewalls in your environment, open ports 161 (SNMP), 8080 (Zenoss Management

Page) and 514 (if you integrate syslog with Zenoss).

Install CentOS 5.1 on the server using your own preferences. I used a bare install with no X Window System or desktop manager. Assign a static IP address and any other pertinent network information (DNS servers and so forth). After the OS install is complete, install the following packages using the yum command below:

yum install mysql mysql-server net-snmp net-snmp-utils gmp httpd

If the mysqld or the httpd service has not started after yum installs it, start it and set it to run for your configured runlevel. Next, download the latest Zenoss Core rpm from SourceForge.net (2.1.3 at the time of this writing), and install it using rpm from the command line. To start all the Zenoss-related dæmons after the rpm has been installed, type the following at a command prompt:

service zenoss start
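For the earlier step of making sure mysqld and httpd are running and registered for your runlevel, the usual CentOS 5 incantation is sketched below. It only prints the commands, as a reminder of the syntax; run the printed commands as root on the actual server.

```shell
# Start mysqld and httpd now and register them for boot (CentOS 5
# service/chkconfig syntax). Printed rather than executed here --
# run the printed commands as root on the real server.
for svc in mysqld httpd; do
    echo "service $svc start"
    echo "chkconfig $svc on"
done
```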

Launch a Web browser from any machine, and type the IP address of the Zenoss server using port 8080 (for example, http://192.168.142.6:8080). Log in to the site using the default account admin with a password of zenoss. This brings up the main dashboard. The dashboard is a compartmentalized view of the state of your managed devices. If you don't like the default display, you can arrange your dashboard any way you want using the various drop-down lists on the portlets (windows). I recommend setting the Production States portlet to display Production, so we can see our test systems after they are added.

Almost everything related to managed devices in Zenoss revolves around classes. With classes, you can create an infinite number of systems, processes or service classifications to monitor. To begin adding devices, we need to set our SNMP community strings at the top-level Devices class. SNMP community strings are like passphrases used to authenticate traffic between devices. If one device wants to communicate with another, they must have matching community names/strings. In many deployments, administrators use the default community name of public (and/or private), which creates a security risk. I recommend changing these strings and making them into a short phrase. You can add numbers and characters to make the community name more complex to guess/crack, but I find phrases easier to remember.

Click on the Devices link on the navigation menu on the left, so that Devices is listed near the top of the page. Click on the zProperties tab and scroll down. Enter an SNMP

Zenoss and the Art of Network Monitoring

If a server goes down, do you want to hear it? JERAMIAH BOWLING



community string in the zSNMPCommunity field. For our test environment, I used the string whatsourclearanceclarence. You can use different strings with different subclasses of systems or individual systems, but by setting it at the Devices class, it will be used for any subclasses unless it is overridden. You also could list multiple strings in the zSNMPCommunities under the Devices class, which allows you to define multiple strings for the discovery process discussed later. Make sure your community string (zSNMPCommunity) is in this list.

Installing Net-SNMP on Linux Clients

Now let's set up our Linux systems so they can talk to the Zenoss server. After installing and configuring the operating systems on our other Linux servers, install the Net-SNMP package on each using the following command on the Ubuntu server:

sudo apt-get install snmpd

And, on the Fedora server, use:

yum install net-snmp

Once the Net-SNMP packages are installed, edit out any other lines in the Access Control sections at the beginning of /etc/snmp/snmpd.conf, and add the following lines:

#       sec.name       source            community
com2sec local          localhost         whatsourclearanceclarence
com2sec mynetwork      192.168.142.0/24  whatsourclearanceclarence

#       group.name     sec.model         sec.name
group   MyROGroup      v1                local
group   MyROGroup      v1                mynetwork
group   MyROGroup      v2c               local
group   MyROGroup      v2c               mynetwork

#       name           incl/excl         subtree   mask
view    all            included          .1        80

#       group          context  sec.model  sec.level  prefix  read  write  notif
access  MyROGroup      ""       any        noauth     exact   all   none   none

Do not edit out any lines beneath the last Access Control sections. Please note that the above is only a mildly restrictive configuration. Consult the snmpd.conf file or the Net-SNMP documentation if you want to tighten access. On the Ubuntu server, you also may have to change the following line in the /etc/default/snmpd file to allow SNMP to bind to anything other than the local loopback address:

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'
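Once snmpd is restarted, it is worth confirming that the community string actually answers queries before moving on to Zenoss. The sketch below only prints the check command; the client IP is a placeholder, and snmpwalk comes from the net-snmp-utils package installed on the Zenoss server earlier.

```shell
# Sketch of a post-configuration sanity check, printed rather than
# executed so it stays side-effect-free. CLIENT_IP is a placeholder;
# the community string is the article's example.
CLIENT_IP=192.168.142.100            # placeholder -- a managed client's IP
COMMUNITY=whatsourclearanceclarence
check="snmpwalk -v 2c -c $COMMUNITY $CLIENT_IP system"
echo "$check"
```

A successful walk of the system subtree returns sysDescr, sysUpTime and friends; a timeout usually means the community string, the source network in com2sec, or the bind address is wrong.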

Installing SNMP on Windows

On the Windows server, access the Add/Remove Programs utility from the Control Panel. Click on the Add/Remove Windows Components button on the left. Scroll down the list of Components, check off Management and Monitoring Tools, and click on the Details button. Check Simple Network Management Protocol in the list, and click OK to install. Close the Add/Remove window, and go into the Services console from Administrative Tools in the Control Panel. Find the SNMP service in the list, right-click on it, and click on Properties to bring up the service properties tabs. Click on the Traps tab, and type in the community name. In the list of Trap Destinations, add the IP address of the Zenoss server. Now, click on the Security tab, and check off the Send authentication trap box, enter the community name, and give it READ-ONLY rights. Click OK, and restart the service.

Return to the Zenoss management Web page. Click the Devices link to go into the subclass of /Devices/Servers/Windows, and on the zProperties tab, enter the name of a domain admin account and password in the zWinUser and zWinPassword fields. This account gives Zenoss access to the Windows Management Instrumentation (WMI) on your Windows systems. Make sure to click Save at the bottom of the page before navigating away.

Adding Devices into Zenoss

Now that our systems have SNMP, we can add them into Zenoss. Devices can be added individually or by scanning the network. Let's do both. To add our Ubuntu server into Zenoss, click on the Add Device link under the Management navigation section. Enter the IP address of the server and the community name. Under Device Class Path, set the selection to /Server/Linux. You could add a variety of other hardware, software and Zenoss information on this page before adding a system, but at a minimum, an IP address, name and community name is required (Figure 1). Click the Add Device button, and the discovery process runs. When the results are displayed, click on the link to the new device to access it.

Figure 1. Adding a Device into Zenoss



To scan the network for devices, click the Networks link under the Browse By section of the navigation menu. If your network is not in the list, add it using CIDR notation. Once added, check the box next to your network, and use the drop-down arrow to click on the Select Discover Devices option. You will see a similar results page as the one from before. When complete, click on the links at the bottom of the results page to access the new devices. Any device found will be placed in the /Discovered class. Because we should have discovered the Fedora server and the Windows server, they should be moved to the /Devices/Servers/Linux and /Devices/Servers/Windows classes, respectively. This can be done from each server's Status tab by using the main drop-down list and selecting Manage→Change Class.

If all has gone well so far, we have a functional SNMP monitoring system that is able to monitor heartbeat/availability (Figure 2) and performance information (Figure 3) on our systems. You can customize other various Status and Performance Monitors to meet your needs, but here we will use the default localhost monitors.

Figure 2. The Zenoss Dashboard

Figure 3. Performance data is collected almost immediately after discovery.

Creating Users and Setting E-Mail Alerts

At this point, we can use the dashboard to monitor the managed devices, but we will be notified only if we visit the site. It would be much more helpful if we could receive alerts via e-mail. To set up e-mail alerting, we need to create a separate user account, as alerts do not work under the admin account. Click on the Settings link under the Management navigation section. Using the drop-down arrow on the menu, select Add User. Enter a user name and e-mail address when prompted. Click on the new user in the list to edit its properties. Enter a password for the new account, and assign a role of Manager. Click Save at the bottom of the page. Log out of Zenoss, and log back in with the new account. Bring the settings page back up, and enter your SMTP server information. After setting up SMTP, we need to create an Alerting Rule for our new user. Click on the Users tab, and click on the account just created in the list. From the resulting page, click on the Edit tab, and enter the e-mail address to which you want alerts sent. Now, go to the Alerting Rules tab, and create a new rule using the drop-down arrow. On the edit tab of the new Alerting Rule, change the Action to email, Enabled to True, and change the Severity formula to >= Warning (Figure 4). Click Save.

Figure 4. Creating an Alert Rule

Figure 5. Zenoss alerts are sent fresh to your mailbox.



The above rule sends alerts when any Production server experiences an event rated Warning or higher (Figure 5). Using a filter, you can create any number of rules and have them apply only to specific devices or groups of devices. If you want to limit your alerts by time, to working hours for example, use the Schedule tab on the Alerting Rule to define a window. If no schedule is specified (the default), the rule runs all the time. In our rule, only one user will be notified. You also can create groups of users from the Settings page, so that multiple people are alerted, or you could use a group e-mail address in your user properties.

Services and Processes

We can expand our view of the test systems by adding a process and a service for Zenoss to monitor. When we refer to a process in Zenoss, we mean an active program, usually a dæmon, running on a managed device. Zenoss uses regular expressions to monitor processes.

To monitor Postfix on the mail server, first let's define it as a process. Navigate to the Processes page under the Classes section of the navigation menu. Use the drop-down arrow next to OS Processes, and click Add Process. Enter Postfix as the process ID. When you return to the previous page, click on the link to the new process. On the edit tab of the process, enter master in the Regex field. Click Save before navigating away. Go to the zProperties tab of the process, and make sure the zMonitor field is set to True. Click Save again. Navigate back to the mail server from the dashboard, and on the OS tab, use the topmost menu's drop-down arrow to select Add→Add OSProcess. After the process has been added, we will be alerted if the Postfix process degrades or fails. While still on the OS tab of the

Figure 6. Zenoss comes with a plethora of predefined Windows services to monitor.


server, place a check mark next to the new Postfix process, and from the OS Processes drop-down menu, select Lock OSProcess. On the next set of options, select Lock from deletion. This protects the process from being overwritten if Zenoss remodels the server.

Services in Zenoss are defined by active network ports instead of running dæmons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1-65535, and include common network apps/protocols, such as SMTP (Port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the main menu's drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).

Only the Beginning

There are a multitude of other features in Zenoss that space here prevents covering, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9) and Zenpacks that provide additional monitoring and performance-capturing capabilities for common applications.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind, as long as you work with networks, with Zenoss, at least you will be able to sleep at night knowing you will hear things when they go down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

Resources

Zenoss: www.zenoss.com

Zenoss SourceForge Downloads Page: sourceforge.net/project/showfiles.php?group_id=163126

NET-SNMP: net-snmp.sourceforge.net

CentOS: www.centos.org

CentOS 5 Mirrors: isoredirect.centos.org/centos/5/isos/i386

Figure 7. Monitoring HTTP as an IPService

Figure 8. Zenoss automatically maps your network for you.

Figure 9. Multiple sites can be monitored geographically with the Google Maps API.


In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market: Dell and HP Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects, such as the Etherboot Project and the Linux Terminal Server Project, have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys world-wide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?

A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot eXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP; stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4); for example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a

Thin Clients Booting over a Wireless Bridge

How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON


Figure 1. LTSP Traffic Profile and Boot Sequence

thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.

How to Set Up a Thin-Client Network with a Wireless Bridge

One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1, serving PXE clients.
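For reference, a minimal /etc/ltsp/dhcpd.conf for that default range looks roughly like the following. The values mirror the Edubuntu defaults described above, but exact defaults vary between releases, so treat this as a sketch and adjust it to your own addressing. The block writes the sample to a scratch file purely for illustration.

```shell
# Sample LTSP dhcpd.conf for the default 192.168.0.1 server. Values
# are a sketch of typical Edubuntu/LTSP defaults -- verify against
# your own /etc/ltsp/dhcpd.conf. Written to a scratch file here.
cat > dhcpd.conf.sample <<'EOF'
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.20 192.168.0.250;
    option routers 192.168.0.1;
    next-server 192.168.0.1;             # TFTP boot server
    filename "/ltsp/i386/pxelinux.0";    # PXE bootloader
    option root-path "/opt/ltsp/i386";   # NFS root for the clients
}
EOF
```

The next-server and filename directives are what steer a PXE client to the TFTP server and bootloader in stage 1 of the boot sequence described earlier; root-path is what the client mounts over NFS in stage 3.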

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap: 192.168.0.0/24
rpc.mountd: 192.168.0.0/24
rpc.statd: 192.168.0.0/24
in.tftpd: 192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart
sudo invoke-rc.d nfs-common restart
sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.
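That copy amounts to a single command; /opt/ltsp/i386 is the usual LTSP chroot location, though your path may differ. The block below demonstrates the copy with stand-in scratch paths so it can run anywhere.

```shell
# Demonstrate the sources.list copy with stand-in paths. On the real
# server this would be:
#   sudo cp /etc/apt/sources.list /opt/ltsp/i386/etc/apt/sources.list
# The directory and the deb line below are examples only.
chroot_apt=./ltsp-i386-example/etc/apt    # stand-in for /opt/ltsp/i386/etc/apt
mkdir -p "$chroot_apt"
echo "deb http://archive.ubuntu.com/ubuntu hardy main" > sources.list.example
cp sources.list.example "$chroot_apt/sources.list"
```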

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide in getting the bridge running so that the clients can boot over the bridge.

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing, and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all daemon programs manually.

Evaluating the Performance of a Thin-Client Network

The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to the wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber;


SYSTEM ADMINISTRATION

Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Conclusion

As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our test shows, as illustrated in Figure 5, heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.
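To put the 802.11g bandwidth ceiling in perspective, a back-of-envelope calculation helps; the 100MB image size and the 20Mbit/s effective wireless rate below are illustrative assumptions, not measurements from our tests:

```shell
# Rough transfer-time estimate for a hypothetical 100MB boot image.
# 20Mbit/s is an optimistic effective rate for 802.11g (54Mbps nominal,
# with actual transfer speeds at less than half that).
image_mb=100
wifi_mbit=20
wired_mbit=100
secs_wifi=$(( image_mb * 8 / wifi_mbit ))     # 40 seconds
secs_wired=$(( image_mb * 8 / wired_mbit ))   # 8 seconds
echo "802.11g: ${secs_wifi}s  wired: ${secs_wired}s"
```

Even under these generous assumptions, the wired network moves the image five times faster, which matches the shape of the curves in Figure 3.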

Acknowledgement

This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03IN31396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu Thin Client How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now, you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network, instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in remote power and remote terminals, or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup

You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP

Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), locate the subnet section (or each host section if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;

filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus

Set up a PXE server and then add menus to boot kickstart images, rescue disks and diagnostic tools, all from the network. KYLE RANKIN



Install TFTP

After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package, or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
# the trivial file transfer protocol. The tftp protocol
# is often used to boot diskless workstations, download
# configuration files to network-aware printers and to
# start the installation process for some operating systems.
service tftp
{
    disable         = no
    socket_type     = dgram
    protocol        = udp
    wait            = yes
    user            = root
    server          = /usr/sbin/in.tftpd
    server_args     = -s /var/lib/tftpboot
    per_source      = 11
    cps             = 100 2
    flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.
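For an iptables-based firewall, a rule along the following lines would open that port; the 10.0.0.0/24 subnet is this article's example range and an assumption for your network. The sketch only builds and prints the command — run it with root privileges to apply it:

```shell
# Build (and print) an iptables rule allowing inbound TFTP (UDP 69)
# from the example subnet; execute the printed command as root.
tftp_rule="iptables -A INPUT -p udp --dport 69 -s 10.0.0.0/24 -j ACCEPT"
echo "$tftp_rule"
```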

Add Syslinux

Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.

PXE Menu Magic

You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

- Files named 01-MACADDRESS, with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

- Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

- Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
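The two lookup names above are easy to derive mechanically. These helper functions are illustrative, not part of syslinux itself, but they produce the same file names pxelinux searches for:

```shell
# MAC lookup name: "01-" prefix, colons replaced with hyphens, lowercased.
mac_to_pxe_name() {
  printf '01-%s\n' "$1" | tr ':' '-' | tr 'A-Z' 'a-z'
}

# Hex IP lookup name: each octet as two uppercase hex digits. pxelinux
# then retries with one trailing hex digit removed after each failed lookup.
ip_to_pxe_name() {
  # word-splitting of the four octets is intentional here
  printf '%02X%02X%02X%02X\n' $(echo "$1" | tr '.' ' ')
}

mac_to_pxe_name "88:99:AA:BB:CC:DD"   # -> 01-88-99-aa-bb-cc-dd
ip_to_pxe_name  "10.0.0.200"          # -> 0A0000C8
```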

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux
label linux
    kernel vmlinuz-centos-3.6
    append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?

The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools, like Knoppix and memtest86+, available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus

Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more-specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

label 2
    kernel vmlinuz-centos-4.5-64
    append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option lists a message, if there are any to display, by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------
<F1> Main | <F2> Help
Default image will boot in 30 seconds...

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk

One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk; I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the ones I used above in my example:

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx5.1.1 and miniroot-knx5.1.1.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove 2




from the append line.

The final step might be the largest for you if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.
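The export itself is one line in /etc/exports. This sketch builds that line for the paths used in the example; the export options are assumptions (a read-only share is all Knoppix needs), and the commented commands must be run as root:

```shell
# Sketch of the NFS share from the example. Run these as root first:
#   mkdir -p /mnt/knoppix/5.1.1
#   mount -o loop knoppix.iso /mnt/knoppix/5.1.1   # or copy the CD contents
# Then append the line below to /etc/exports and run: exportfs -ra
export_line='/mnt/knoppix/5.1.1 10.0.0.0/24(ro,async,no_subtree_check)'
echo "$export_line"
```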

Extra Features: Memtest86+

Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
    kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion

There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.



Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan

IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825–1829). Later, in 1998, these RFCs were deprecated by RFCs 2401–2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2404) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.
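The ICV idea can be illustrated with a keyed hash over the packet bytes. This sketch uses HMAC-SHA1 via the openssl command purely as an analogy; the actual ESP integrity algorithm and keys are negotiated by IKE, and the secret and payload below are made up:

```shell
# Toy ICV: HMAC-SHA1 over "packet" bytes with a shared secret.
# A changed payload (or a wrong key) yields a different ICV, which
# is how the receiver detects tampering.
secret="shared-secret-key"           # hypothetical pre-shared value
payload="example packet payload"
icv=$(printf '%s' "$payload" | openssl dgst -sha1 -hmac "$secret" | awk '{print $NF}')
echo "ICV: $icv"                     # 40 hex characters for SHA-1
```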

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

Creating VPNs with IPsec and SSL/TLS

How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN

Figure 1. A Basic VPN Tunnel

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. S/WAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a setmark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN

OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.

www.linuxjournal.com | 31

Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan

First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
    # Simply use raw RSA keys.
    # After starting openswan, run:
    # ipsec showhostkey --left (or --right)
    # and fill in the connection similarly
    # to the example below.
    left=192.168.0.89
    leftrsasigkey=0sAQPP
    # The remote user.
    right=192.168.0.92
    rightrsasigkey=0sAQON
    type=tunnel
    auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsakey

Then, copy and paste the contents of rsakey into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. Output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly

Version check and ipsec on-path [OK]

Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)

Checking for IPsec support in kernel [OK]

NETKEY detected testing for disabled ICMP send_redirects [OK]

NETKEY detected testing for disabled ICMP accept_redirects [OK]

Checking for RSA private key (/etc/ipsec.d/hostkey.secrets) [OK]

Checking that pluto is running [OK]

Checking for ip command [OK]

Checking for iptables command [OK]

Opportunistic Encryption Support [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets in each direction; it is an identifier of the connection.
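This per-direction SPI consistency can be checked from a saved capture; the sketch below runs on sample lines written to a temporary file (the SPI values are the ones from the output above; on a live system, you would redirect tcpdump output instead):

```shell
# Save a few captured ESP lines (sample data for illustration).
cat > /tmp/esp.log <<'EOF'
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)
EOF
# List the distinct SPIs -- exactly one should appear per direction.
grep -oE 'spi=0x[0-9a-f]+' /tmp/esp.log | sort -u
```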

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.


SYSTEM ADMINISTRATION

Example: Setting Up a VPN Tunnel with OpenVPN

First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side over a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest key; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun

ifconfig 10.0.0.1 10.0.0.2

secret static.key

comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName

dev tun

ifconfig 10.0.0.2 10.0.0.1

secret static.key

comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar 3 2007

IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.

LZO compression initialized

TUN/TAP device tun0 opened

/sbin/ip link set dev tun0 up mtu 1500

/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1

UDPv4 link local (bound): [undef]:1194

UDPv4 link remote: 192.168.0.89:1194

Peer Connection Initiated with 192.168.0.89:1194

Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read from or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).

IPsec and OpenVPN: A Short Comparison

IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not currently supported by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in the kernel, OpenVPN runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement

Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of the Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and OpenSolaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era

Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security

Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.
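A minimal sketch of this pattern in MySQL syntax (the schema, table, account and procedure names here are hypothetical):

```sql
DELIMITER //
CREATE DEFINER = 'dbowner'@'localhost'
  PROCEDURE bank.log_deposit(IN acct INT, IN amt DECIMAL(10,2))
  SQL SECURITY DEFINER
BEGIN
  -- Every deposit goes through this procedure, so it is always logged.
  INSERT INTO audit_log (account_id, amount, ts) VALUES (acct, amt, NOW());
  UPDATE accounts SET balance = balance + amt WHERE account_id = acct;
END //
DELIMITER ;

-- The application account may call the procedure but touch nothing else:
GRANT EXECUTE ON PROCEDURE bank.log_deposit TO 'app'@'%';
```

Because the procedure runs with the definer's rights, the application account needs EXECUTE privilege only, with no direct table grants.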

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application.

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

Data Abstraction

It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic

Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
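Such a transfer procedure might look like the following sketch (the accounts table is hypothetical, and the withdrawal-limit and policy checks described above are elided for brevity):

```sql
DELIMITER //
CREATE PROCEDURE transfer_funds(
  IN from_acct INT, IN to_acct INT, IN amt DECIMAL(10,2))
BEGIN
  -- One CALL from the client replaces several network round trips.
  START TRANSACTION;
  UPDATE accounts SET balance = balance - amt WHERE account_id = from_acct;
  UPDATE accounts SET balance = balance + amt WHERE account_id = to_acct;
  COMMIT;
END //
DELIMITER ;
```

The ATM application then issues a single CALL transfer_funds(...) statement instead of six separate queries.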

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications

Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in


Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed

It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine and you have a choice



Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation

Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping

It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?

Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements, such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer, while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations

MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes and will take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
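As a sketch of such a function (the orders table and its column names here are hypothetical):

```sql
CREATE FUNCTION order_age_days(ordered DATETIME)
  RETURNS INT
  RETURN DATEDIFF(NOW(), ordered);

-- The centralized calculation is then reusable from any SQL statement:
SELECT order_id, order_age_days(order_date) AS age_days FROM orders;
```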

You Decide

The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006; with Steven Feuerstein). Guy can be contacted at guyharrison@quest.com.



In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat

Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat

Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.
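One way to generate such a long key is to hash some random data; this sketch writes to /tmp for illustration (the real file is /etc/ha.d/authkeys and must be owned by root):

```shell
# Derive a long random key -- nobody ever needs to type it.
KEY=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | awk '{print $1}')
# Write the two-line authkeys format: the auth directive, then the
# method and key for that ID, with the required 600 permissions.
cat > /tmp/authkeys <<EOF
auth 1
1 sha1 $KEY
EOF
chmod 600 /tmp/authkeys
```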

The ha.cf File

The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive, the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW


Listing 1. The /etc/ha.d/authkeys File

auth 1
1 sha1 ThisIsAVeryLongAndBoringPassword


increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.
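That check can be scripted; this sketch filters sample setserial output written to a file so it runs even without serial hardware (on a live system, pipe setserial -g /dev/ttyS* in directly):

```shell
# Sample output from `setserial -g /dev/ttyS*` (captured for illustration).
cat > /tmp/serial.txt <<'EOF'
/dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 4
/dev/ttyS1, UART: unknown, Port: 0x02f8, IRQ: 3
EOF
# Drop the "UART: unknown" entries; what remains are real serial ports.
grep -v 'UART: unknown' /tmp/serial.txt | cut -d, -f1
```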

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.
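For reference, the mcast directive takes more arguments than ucast or bcast; the documented form is a device, a multicast group address, a UDP port, a TTL and a loop flag. The values below are purely illustrative, not taken from the clusters described here:

```
mcast eth0 225.0.0.1 694 1 0
```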

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos' heartbeat IP address is 192.168.1.11

# phobos' heartbeat IP address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File

The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.
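On Red Hat-style distributions, for example, the equivalent step typically uses chkconfig; a sketch:

```
/sbin/chkconfig --del <service_name>
```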

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).



Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages.

Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second, which is not bad for two old castaways.
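That hourly rsync job can be as small as a single crontab entry. The sketch below assumes a system-wide /etc/cron.d file; the hostname and document root are invented for illustration, not details taken from the article:

```
# /etc/cron.d/web-sync (hypothetical): pull static HTML from the peer every hour
0 * * * * root rsync -a --delete stratton:/var/www/ /var/www/
```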

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting

Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion

I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include

active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use, Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

InstallationTo begin download a Fedora 6 ISO readily available from oneof the many Fedora mirrors FDS has low hardware require-mentsmdash500MHz with 256MB RAM and 3GB or more space Irecommend at least a 1GHz processor or above with 512MBor more memory and 20GB or more of disk space This config-uration should perform well enough to support small business-es up to enterprises with thousands users As for supportedoperating systems not surprisingly Red Hat lists Fedora andRed Hat flavors of Linux HP-UX and Solaris also are supportedWith your bootable ISO CD start the Fedora 6 installation pro-cess and select your desired system preferences and packagesMake sure to select Apache during installation Set your hostand DNS information during the install using a fully qualifieddomain name (FQDN) You also can set this information post-install but it is critical that your host information is configuredproperly If you plan to use a firewall you need to enter two

ports to allow: LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication. Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING


numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for


Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying

permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate

the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two), configured the same way (Fedora + FDS), with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts, or use DNS. I recommend having DNS and NTP in place already to make life easier.

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab, and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On



In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others.

Figure 5. User Screen

Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option and copy the resulting ldif folder to server two, then import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients

With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
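On Fedora systems without a GUI, the same settings usually can be applied with the authconfig command-line tool; a sketch, in which the server URL and base DN are placeholders you would replace with your own:

```
authconfig --enableldap --enableldapauth \
    --ldapserver=ldap://fds.example.com:3891 \
    --ldapbasedn="dc=example,dc=com" \
    --update
```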


Figure 7. Enter a name and password.

Figure 8. Enter information for the RManager account.

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll

to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some dll issues with

version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.



Setting up replication in FDS is a relatively painless process.

Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525



There are many different ways to control a Linux system over a network, and many reasons you might want to do so. When covering remote control in past columns, I've tended to focus on server-oriented usage scenarios and tools, which is to say, on administering servers via text-based applications, such as OpenSSH. But, what about GUI-based applications and remote desktops?

Remote desktop sessions can be very useful for technical support, surveillance and remote control of desktop applications. But it isn't always necessary to access an entire desktop; you may need to run only one or two specific graphical applications.

In this month's column, I describe how to use VNC (Virtual Network Computing) for remote desktop sessions and OpenSSH with X forwarding for remote access to specific applications. Our focus here, of course, is on using these tools securely, and I include a healthy dose of opinion as to the relative merits of each.

Remote Desktops vs. Remote Applications

So, which approach should you use: remote desktops or remote applications? If you've come to Linux from the Microsoft world, you may be tempted to assume that because Terminal Services in Windows is so useful, you have to have some sort of remote desktop access in Linux, too. But that may not be the case.

Linux and most other UNIX-based operating systems use the X Window System as the basis for their various graphical environments. And the X Window System was designed to be run over networks. In fact, it treats your local system as a self-contained network, over which different parts of the X Window System communicate.

Accordingly, it's not only possible but easy to run individual X Window System applications over TCP/IP networks; that is, to display the output (window) of a remotely executed graphical application on your local system. Because the X Window System's use of networks isn't terribly secure (the X Window System has no native support whatsoever for any kind of encryption), nowadays we usually tunnel X Window System application windows over the Secure Shell (SSH), especially OpenSSH.

The advantage of tunneling individual application windows is that it's faster and generally more secure than running the entire desktop remotely. The disadvantages are that OpenSSH has a history of security vulnerabilities, and for many Linux newcomers, forwarding graphical applications via commands entered in a shell session is counterintuitive. And besides, as I

mentioned earlier, remote desktop control (or even just viewing) can be very useful for technical support and for security surveillance.

Using OpenSSH with X Window System Forwarding

Having said all that, tunneling X Window System applications over OpenSSH may be a lot easier than you imagine. All you need is a client system running an X server (for example, a Linux desktop system or a Windows system running the Cygwin X server) and a destination system running the OpenSSH daemon (sshd).

Note that I didn't say "a destination system running sshd and an X server". This is because X servers, oddly enough, don't actually run or control X Window System applications; they merely display their output. Therefore, if you're running an X Window System application on a remote system, you need to run an X server on your local system, not on the remote system. The application will execute on the remote system and send its output to your local X server's display.

Suppose you've got two systems, mylaptop and remotebox, and you want to monitor system resources on remotebox with the GNOME System Monitor. Suppose further you're running the X Window System on mylaptop and sshd on remotebox.

First, from a terminal window or xterm on mylaptop, you'd open an SSH session like this:

mick@mylaptop:~$ ssh -X admin-slave@remotebox

admin-slave@remotebox's password:

Last login: Wed Jun 11 21:50:19 2008 from dtcla00b674986d

admin-slave@remotebox:~$

Note the -X flag in my ssh command. This enables X Window System forwarding for the SSH session. In order for that to work, sshd on the remote system must be configured with X11Forwarding set to yes in its /etc/ssh/sshd_config file. On many distributions, yes is the default setting, but check yours to be sure.

Next, to run the GNOME System Monitor on remotebox, such that its output (window) is displayed on mylaptop, simply execute it from within the same SSH session:

admin-slave@remotebox:~$ gnome-system-monitor &

The trailing ampersand (&) causes this command to run in

Secured Remote Desktop/Application Sessions: Run graphical applications from afar, securely. MICK BAUER


the background, so you can initiate other commands from the same shell session. Without this, the cursor won't reappear in your shell window until you kill the command you just started.

At this point, the GNOME System Monitor window should appear on mylaptop's screen, displaying system performance information for remotebox. And that really is all there is to it.

This technique works for practically any X Window System application installed on the remote system. The only catch is that you need to know the name of anything you want to run in this way; that is, the actual name of the executable file.

If you're accustomed to starting your X Window System applications from a menu on your desktop, you may not know the names of their corresponding executables. One quick way to find out is to open your desktop manager's menu editor, and then view the properties screen for the application in question.

For example, on a GNOME desktop, you would right-click on the Applications menu button, select Edit Menus, scroll down to System/Administration, right-click on System Monitor and select Properties. This pops up a window whose Command field shows the name gnome-system-monitor.

Figure 1 shows the Launcher Properties, not for the GNOME System Monitor, but instead for the GNOME File Browser, which is a better example, because its launcher entry includes some command-line options. Obviously, all such options also can be used when starting X applications over SSH.

Figure 1. Launcher Properties for the GNOME File Browser (Nautilus)

If this sounds like too much trouble, or if you're worried about accidentally messing up your desktop menus, you simply can run the application in question, issue the command ps auxw in a terminal window, and find the entry that corresponds to your application. The last field in each row of the output from ps is the executable's name, plus the command-line options with which it was invoked.
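For instance, you can pick the command name out of such a row with a quick pipeline. The sample line below is made up for illustration; in ps auxw output, the command and its options begin at the eleventh whitespace-separated field:

```shell
# A hypothetical line of `ps auxw` output
line="mick  4242  0.3  1.2 123456 7890 ?  Sl  10:01  0:02 gnome-system-monitor"
# Fields 11 and onward hold the command; print just the executable's name
cmd=$(echo "$line" | awk '{print $11}')
echo "$cmd"
```

In practice, you'd pipe the output of ps auxw itself through grep or awk rather than a canned string.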

Once you've finished running your remote X Window System application, you can close it the usual way (selecting Exit from its File menu, clicking the x button in the upper right-hand corner of its window, and so forth). Don't forget to close your SSH session too, by issuing the command exit in the terminal window where you're running SSH.

Virtual Network Computing (VNC)
Now that I've shown you the preferred way to run remote X Window System applications, let's discuss how to control an entire remote desktop. In the Linux/UNIX world, the most popular tool for this is Virtual Network Computing, or VNC.

Originally a project of the Olivetti Research Laboratory (which was subsequently acquired by Oracle and then AT&T before being shut down), VNC uses a protocol called Remote Frame Buffer (RFB). The original creators of VNC now maintain the application suite RealVNC, which is available in free and commercial versions, but TightVNC, UltraVNC and GNOME's vino VNC server and vinagre VNC client also are popular.

VNC's strengths are its simplicity, ubiquity and portability; it runs on many different operating systems. Because it runs over a single TCP port (usually TCP 5900), it's also firewall-friendly and easy to tunnel.

Its security, however, is somewhat problematic. VNC authentication uses a DES-based transaction that, if eavesdropped on, is vulnerable to optimized brute-force (password-guessing) attacks. This vulnerability is exacerbated by the fact that many versions of VNC support only eight-character passwords.

Furthermore, VNC session data usually is transmitted unencrypted. Only a couple flavors of VNC support TLS encryption of RFB streams, and it isn't clear how securely TLS has been implemented even in those. Thus, an attacker using a trivially hacked VNC client may be able to reconstruct and view eavesdropped VNC streams.

www.linuxjournal.com | 11

But Don't Real Sysadmins Stick to Terminal Sessions?
If you've read my book or my past columns, you've endured my repeated exhortations to keep the X Window System off of Internet-facing servers, or any other system on which it isn't needed, due to X's complexity and history of security vulnerabilities. So why am I devoting an entire column to graphical remote system administration?

Don't worry, I haven't gone soft-hearted (though possibly slightly soft-headed). I stand by that advice. But there are plenty of contexts in which you may need to administer or view things remotely, besides hardened servers in Internet-facing DMZ networks.

And not all people who need to run remote applications in those non-Internet-DMZ scenarios are advanced Linux users. Should they be forbidden from doing what they need to do until they've mastered using the vi editor and writing bash scripts? Especially given that it is possible to mitigate some of the risks associated with the X Window System and VNC?

Of course, they shouldn't, although I do encourage all Linux newcomers to embrace the command line. The day may come when Linux is a truly graphically oriented operating system like Mac OS, but for now, pretty much the entire OS is driven by configuration files in /etc (and in users' home directories), and that's unlikely to change soon.

Luckily, as it operates over a single TCP port, VNC is easy to tunnel through SSH, through Virtual Private Network (VPN) sessions and through TLS/SSL wrappers, such as stunnel. Let's look at a simple VNC-over-SSH example.

Tunneling VNC over SSH
To tunnel VNC over SSH, your remote system must be running an SSH dæmon (sshd) and a VNC server application. Your local system must have an SSH client (ssh) and a VNC client application.

Our example remote system, remotebox, already is running sshd. Suppose it also has vino, which is also known as the GNOME Remote Desktop Preferences applet (on Ubuntu systems, it's located in the System menu's Preferences section). First, presumably from remotebox's local console, you need to open that applet and enable remote desktop sharing. Figure 2 shows vino's General settings.

If you want to view only this system's remote desktop without controlling it, uncheck Allow other users to control your desktop. If you don't want to have to confirm remote connections explicitly (for example, because you want to connect to this system when it's unattended), you can uncheck the Ask you for confirmation box. Any time you leave yourself logged in to an unattended system, be sure to use a password-protected screensaver.

vino is limited in this way: because vino is loaded only after you log in, you can use it only to connect to a fully logged-in GNOME session in progress, not, for example, to a gdm (GNOME login) prompt. Unlike vino, other versions of VNC can be invoked as needed from xinetd or inetd. That technique is out of the scope of this article, but see Resources for a link to a how-to for doing so in Ubuntu, or simply Google the string "vnc xinetd".

Continuing with our vino example, don't close that Remote Desktop Preferences applet yet. Be sure to check the Require the user to enter this password box, and to select a difficult-to-guess password. Remember, vino runs in an already-logged-in desktop session, so unless you set a password here, you'll run the risk of allowing completely unauthenticated sessions (depending on whether a password-protected screensaver is running).

If your remote system will be run logged in but unattended, you probably will want to uncheck Ask you for confirmation. Again, don't forget that locked screensaver.

We're not done setting up vino on remotebox yet. Figure 3 shows the Advanced Settings tab.

Several advanced settings are particularly noteworthy. First, you should check Only allow local connections, after which remote connections still will be possible, but only via a port-forwarder like SSH or stunnel. Second, you may want to set a custom TCP port for vino to listen on, via the Use an alternative port option. In this example, I've chosen 20226. This is an instance of useful security-through-obscurity: if our other (more meaningful) security controls fail, at least we won't be running VNC on the obvious default port.

Figure 2. General Settings in GNOME Remote Desktop Preferences (vino)

Figure 3. Advanced Settings in GNOME Remote Desktop Preferences (vino)

Also, you should check the box next to Lock screen on disconnect, but you probably should not check Require encryption, as SSH will provide our session encryption, and adding redundant (and not-necessarily-known-good) encryption will only increase vino's already-significant latency. If you think there's only a moderate level of risk of eavesdropping in the environment in which you want to use vino (for example, on a small, private, local-area network inaccessible from the Internet), vino's TLS implementation may be good enough for you. In that case, you may opt to check the Require encryption box and skip the SSH tunneling.

(If any of you know more about TLS in vino than I was able to divine from the Internet, please write in. During my research for this article, I found no documentation or on-line discussions of vino's TLS design details whatsoever, beyond people commenting that the soundness of TLS in vino is unknown.)

This month, I seem to be offering you more "opt-out" opportunities in my examples than usual. Perhaps I'm becoming less dogmatic with age. Regardless, let's assume you've followed my advice to forego vino's encryption in favor of SSH.

At this point, you're done with the server-side settings. You won't have to change those again. If you restart your GNOME session on remotebox, vino will restart as well, with the options you just set. The following steps, however, are necessary each time you want to initiate a VNC/SSH session.

On mylaptop (the system from which you want to connect to remotebox), open a terminal window, and type this command:

mick@mylaptop:~$ ssh -L 20226:remotebox:20226 admin-slave@remotebox

OpenSSH's -L option sets up a local port-forwarder. In this example, connections to mylaptop's TCP port 20226 will be forwarded to the same port on remotebox. The syntax for this option is "-L [localport]:[remote IP or hostname]:[remoteport]". You can use any available local TCP port you like (higher than 1024, unless you're running SSH as root), but the remote port must correspond to the alternative port you set vino to listen on (20226 in our example), or if you didn't set an alternative port, it should be VNC's default of 5900.
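To make the pattern concrete, here is the same forwarder assembled from its three parts. The values are simply the ones used in this article's example; any free local port above 1024 would do. The finished command is echoed for review rather than executed:

```shell
LOCALPORT=20226          # any free local TCP port above 1024
REMOTEHOST=remotebox     # the system running vino
REMOTEPORT=20226         # vino's listening port (5900 if you kept the default)
echo "ssh -L ${LOCALPORT}:${REMOTEHOST}:${REMOTEPORT} admin-slave@${REMOTEHOST}"
```

Paste the echoed command into your terminal once you've confirmed the host and ports are right for your setup.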

That's it for the SSH session. You'll be prompted for a password and then given a bash prompt on remotebox. But you won't need it, except to enter exit when your VNC session is finished. It's time to fire up your VNC client.

vino's companion client, vinagre (also known as the GNOME Remote Desktop Viewer), is good enough for our purposes here. On Ubuntu systems, it's located in the Applications menu in the Internet section. In Figure 4, I've opened the Remote Desktop Viewer and clicked the Connect button. As you can see, rather than remotebox, I've entered localhost as the hostname. I've also entered port number 20226.

When I click the Connect button, vinagre connects to mylaptop's local TCP port 20226, which actually is being listened on by my local SSH process. SSH forwards this connection attempt through its encrypted connection to TCP 20226 on remotebox, which is being listened on by remotebox's vino process.

At this point, I'm prompted for remotebox's vino password (shown in Figure 2). On successful authentication, I'll have full access to my active desktop session on remotebox.

To end the session, I close the Remote Desktop Viewer's session, and then enter exit in my SSH session to remotebox; that's all there is to it.

This technique is easy to adapt to other versions of VNC servers and clients, and probably also to other versions of SSH; there are commercial alternatives to OpenSSH, including Windows versions. I mentioned that VNC client applications are available for a wide variety of platforms; on any such platform, you can use this technique, so long as you also have an SSH client that supports port forwarding.

Conclusion
Thus ends my crash course on how to control graphical applications over networks. I hope my examples of both techniques, SSH with X forwarding and VNC over SSH, help you get started with whatever particular distribution and software packages you prefer. Be safe!

Mick Bauer (darth.elmo@wiremonkeys.org) is Network Security Architect for one of the US's largest banks. He is the author of the O'Reilly book Linux Server Security, 2nd edition (formerly called Building Secure Servers With Linux), an occasional presenter at information security conferences and composer of the "Network Engineering Polka".


Resources

The Cygwin/X project (information about Cygwin's free X server for Microsoft Windows): x.cygwin.com

Tichondrius' HOWTO for setting up VNC with resumable sessions, Ubuntu-centric but mostly applicable to other distributions: ubuntuforums.org/showthread.php?t=122402

Wikipedia's VNC article, which may be helpful in making sense of the different flavors of VNC: en.wikipedia.org/wiki/Vnc

Figure 4. Using vinagre to Connect to an SSH-Forwarded Local Port


Does anyone remember Linuxcare? Founded in 1998, Linuxcare was a company that provided support services for Linux users in corporate environments. I remember seeing Linuxcare at the first-ever LinuxWorld conference in San Jose, and the thing I took away from that LinuxWorld was the Linuxcare Bootable Business Card (BBC). The BBC was a 50MB cut-down Linux distribution that fit on a business-card-size compact disc. I used that distribution to recover and repair quite a few machines, until the advent of Knoppix. I always loved the portability of that little CD though, and I missed it greatly until I stumbled across Damn Small Linux (DSL) one day.

After reading through the DSL Web site, I discovered that it was possible to run DSL off of a bootable USB key, and that old love for the Bootable Business Card was rekindled in a new way. It wasn't until I had a conversation with fellow sysadmin Kyle Rankin about the PXE boot environment he'd implemented that I realized it might be possible to set up a USB key to do more than merely boot a recovery environment. Before long, I had added the CentOS and Ubuntu netinstalls to my little USB key. Not long after that, I was mentioning this in my favorite IRC channel, and one of the fellows in there suggested I put the code on SourceForge and call it Billix. I'd had a couple beers by then and thought it sounded like a great idea. In that instant, Billix was born.

Billix is an aggregation of many different tools that can be useful to system administrators, all compressed down to fit within a 256MB bootable USB thumbdrive. The 256MB size is not an arbitrary number; rather, it was chosen because USB thumbdrives are very inexpensive at that size (many companies now give them away as advertising gimmicks). This allows me to have many Billix keys lying around, just waiting to be used. Because the keys are cheap or free, I don't feel bad about leaving one in a server for a day or two. If your USB drive is larger than 256MB, you still can use it for its designed purpose, storing files; Billix doesn't hamper normal use of the USB drive in any way. There also is an ISO distribution of Billix if you want to burn a CD of it, but I feel it's not nearly as convenient as having it on a USB key.

The current Billix distribution (0.21 at the time of this writing) includes the following tools:

Damn Small Linux 4.2.5

Ubuntu 8.04 LTS netinstall

Ubuntu 7.10 netinstall

Ubuntu 6.06 LTS netinstall

Fedora 8 netinstall

CentOS 5.1 netinstall

CentOS 4.6 netinstall

Debian Etch netinstall

Debian Sarge netinstall

Memtest86 memory-checking utility

Ntpwd Windows password changing utility

DBAN disk wiper utility

So, with one USB key, a system administrator can recover or repair a machine, install one of eight different Linux distributions, test the memory in a system, get into a Windows machine with a lost password, or wipe the disks of a machine before repurpose or disposal. In order to install any of the netinstall-based Linux distributions, a working Internet connection with DHCP is required, as the netinstall downloads the installation bits for each distribution on the fly from Internet-based mirrors.

Billix: a Sysadmin's Swiss Army Knife
Turn that spare USB stick into a sysadmin's dream with Billix. BILL CHILDERS

Hopefully, you're excited to check out Billix. You simply can download the ISO version and burn it to a CD to get started, but the full utility of Billix really shines when you install it on a USB disk. Before you install it on a USB disk, you need to meet the following prerequisites:

256MB or greater USB drive with FAT- or FAT32-based filesystem.

Internet connection with DHCP (for netinstalls only; not required for DSL, Windows password removal or disk wiping with DBAN).

install-mbr (part of the mbr package on Ubuntu or Debian, needed for some USB drives).

syslinux (from the syslinux package on Ubuntu or Debian, required to create the bootsector on the USB drive).

Your system must be capable of booting from USB devices (most have this ability if they're made after 2005).

To install the USB-based version of Billix, first check your drive. If that drive has the U3 Windows software on it, you may want to remove it to unlock all of the drive's capacity (see the Resources section for U3 removal utilities, which are typically Windows-based). Next, if your USB drive has data on it, back up the data. I cannot stress this enough. You will be making adjustments to the partition table of the USB drive, so backing up any data that already is on the key is critical. Download the latest version of Billix from the Sourceforge.net project page to your computer. Once the download is complete, untar the contents of the tarball to the root directory of your USB drive.

Now that the contents of the tarball are on your USB drive, you need to install a Master Boot Record (MBR) on the drive and set a bootsector on the drive. The Master Boot Record needs to be set up on the USB drive first. Issue an install-mbr -p1 <device> (where <device> is your USB drive, such as /dev/sdb). Warning: make sure that you get the device of the USB drive correct, or you run the risk of messing up the MBR on your system's boot device. The -p1 option tells install-mbr to set the first partition as active (that's the one that will contain the bootsector).

Next, the bootsector needs to be installed within the first partition. Run syslinux -s <device/partition> (where <device/partition> is the device and partition of the USB drive, such as /dev/sdb1). Warning: much like installing the MBR, installing the bootsector can be a dangerous operation if you run it on the wrong device, so take care and double-check your command line before pressing the Enter key.
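The two steps above can be sketched as a short session. The device name /dev/sdb is only an assumption; confirm yours with fdisk -l (or lsblk) first, because pointing these tools at the wrong disk can make your system unbootable. The commands are echoed rather than executed here, so you can review them before running them for real:

```shell
# Assumed device name; ALWAYS confirm with `fdisk -l` before running for real
DEVICE=/dev/sdb
# Echoed instead of executed so the commands can be reviewed first
echo "install-mbr -p1 $DEVICE"     # write the MBR, mark partition 1 active
echo "syslinux -s ${DEVICE}1"      # install the bootsector on partition 1
```

Drop the echo (and run as root) once you're certain the device is your USB key.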

Figure 1. The Billix Boot Menu

At this point, your USB drive can be unmounted safely, and you can test it out by booting from the USB drive. Once your system successfully boots from the USB drive, you should see a menu similar to the one shown in Figure 1. Simply choose the number for what you want to boot, run or install, and that distribution will spring into action. If you don't select a number, Damn Small Linux will boot automatically after 30 seconds.

Damn Small Linux is a miniature version of Knoppix (it actually has much of the automatic hardware-detection routines of Knoppix in it). As such, it makes an excellent rescue environment, or it can be used as a quick "trusted desktop" in the event you need to "borrow" a friend's computer to do something. I have used DSL in the past to commandeer a system temporarily at a cybercafé, so I could log in to work and fix a sick server. I've even used DSL to boot and mount a corrupted Windows filesystem, and I was able to save some of the data. DSL is fairly full-featured for its size, and it comes with two window managers (JWM or Fluxbox). It can be configured to save its data back to the USB disk in a persistent fashion, so you always can be sure you have your critical files with you and that it's easily accessible.

All the Linux distribution installations have one thing in common: they are all network-based installs. Although this is a good thing for Billix, as they take up very little space (around 10MB for each distro), it can be a bad thing during installation, as the installation time will vary with the speed of your Internet connection. There is one other upside to a network-based installation: in many cases, there is no need to update the newly installed operating system after installation, because the OS bits that are downloaded are typically up to date. Note that when using the Red Hat-based installers (CentOS 4.6, CentOS 5.1 and Fedora 8), the system may appear to hang during the download of a file called minstg2.img. The system probably isn't hanging; it's just downloading that file, which is fairly large (around 40MB), so it can take a while, depending on the speed of the mirror and the speed of your connection. Take care not to specify the USB disk accidentally as the install target for the distribution you are attempting to install.

Troubleshooting Billix

A few things can go wrong when converting a USB key to run Billix (or any USB-based distribution). The most common issue is for the USB drive to fail to boot the system. This can be due to several things. Older systems often split USB disk support into USB-Floppy emulation and USB-HDD emulation. For Billix to work on these systems, USB-HDD needs to be enabled. If your drive came with the U3 Windows-based software vault, this typically needs to be disabled or removed prior to installing Billix.

If you're seeing "MBR123" or something similar in the upper-left corner, but the system is hanging, you have a misconfigured MBR. Try install-mbr again, and make sure to use the -p1 switch. You will need to run syslinux again after running install-mbr. If all else fails, you probably need to wipe the USB drive and begin again. Back up the data on the USB drive, then use fdisk to build a new partition table (make sure to set it as FAT or FAT32). Use mkfs.vfat (with the -F 32 switch if it's a FAT32 filesystem) to build a new, blank filesystem, untar the tarball again, and run install-mbr and syslinux on the newly defined filesystem.

The memtest86 utility has been around for quite a few years, yet it's a key tool for a sysadmin when faced with a flaky computer. It does only one thing, but it does it very well: it tests the RAM of a system very thoroughly. Simply boot off the USB drive, select memtest from the menu, and press Enter, and memtest86 will load and begin testing the RAM of the system immediately. At this point, you can remove the USB drive from the computer. It's no longer needed, as memtest86 is very small and loads completely into memory on startup.

The ntpwd Windows password "cracking" tool can be a controversial tool, but it is included in the Billix distribution because, as a system administrator, I've been asked countless times to get into Windows systems (or accounts on Windows systems) where the password has been lost or forgotten. The ntpwd utility can be a bit daunting, as the UI is text-based and nearly nonexistent, but it does a good job of mounting FAT32- or NTFS-based partitions, editing the SAM account database and saving those changes. Be sure to read all the messages that ntpwd displays, and take care to select the proper disk partition to edit. Also, take the program's advice and nullify a password rather than trying to change it from within the interface; zeroing the password works much more reliably.

DBAN (otherwise known as Darik's Boot and Nuke) is a very good "nuke it from orbit" hard disk wiper. It provides various levels of wipe, from a basic "overwrite the disk with zeros" to a full DoD-certified, multipass wipe. Like memtest86, DBAN is small and loads completely into memory, so you can boot the utility, remove the USB drive, start a wipe and move on to another system. I've used this to wipe clean disks on systems before handing them over to a recycler or before selling a system.

In closing, Billix may not make you coffee in the morning or eradicate Windows from the face of the earth, but having a USB key in your pocket that offers you the functionality to do all of those tasks quickly and easily can make the life of a system administrator (or any Linux-oriented person) much easier.

Bill Childers is an IT Manager in Silicon Valley, where he lives with his wife and two children. He enjoys Linux far too much, and probably should get more sun from time to time. In his spare time, he does work with the Gilroy Garlic Festival, but he does not smell like garlic.




Expanding Billix
It's relatively easy to expand Billix to support other Linux distributions, such as Knoppix or the Ubuntu live CDs. Copy the contents of the Billix USB tarball to a directory on your hard disk, and download the distro you want. Copy the necessary kernel and initrd to the directory where you put the contents of the USB tarball, taking care to rename any files if there are files in that directory with the same name. Copy any compressed filesystems that your new distro may use to the USB drive (for example, Knoppix has the KNOPPIX directory, and Puppy Linux uses PUP_XXX.SFS). Then, look at the boot configuration for that distro (it should be in isolinux.cfg). Take the necessary lines out of that file, and put them in the Billix syslinux.cfg file, changing filenames as necessary. Optionally, you can add a menu item to the boot.msg file. Finally, run syslinux -s <device>, and reboot your system to test out your newly expanded Billix.
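As a sketch, an added entry in syslinux.cfg might look like the following. The label, kernel filename, initrd filename and append options here are all hypothetical; take the real values from the distro's own isolinux.cfg:

```
LABEL mydistro
  KERNEL mydistro-vmlinuz
  APPEND initrd=mydistro-initrd.gz
```

The LABEL name is what you'd type at the Billix boot prompt (and what you'd list in boot.msg).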

I have a 2GB USB drive that has a "Super-Billix" installation that includes Knoppix and Ubuntu 8.04. An added bonus of having the entire Ubuntu live CD in your pocket is that, thanks to the speed of USB 2.0, you can install Ubuntu in less than ten minutes, which would be really useful at an installfest. There is good information on creating Ubuntu-bootable USB drives available at the Pendrive Linux Web site.

Alternatively, a really neat thing to do (but way beyond the scope of this article) is to convert Billix into a network-boot (via Pre-Execution Environment, or PXE) environment. I've actually got a VMware virtual machine running Billix as a PXE boot server.

Resources

Billix Project Page: sourceforge.net/projects/billix

Damn Small Linux: www.damnsmalllinux.org

DBAN Project Page: dban.sourceforge.net

Knoppix: www.knoppix.net

Pendrive Linux: www.pendrivelinux.com

Syslinux: syslinux.zytor.com/index.php

Pxelinux: syslinux.zytor.com/pxe.php

U3 Removal Software: www.u3.com/uninstall


If a tree falls in the woods and no one is there to hear it, does it make a sound? This is the classic query designed to place your mind into the Zen-like state known as the silent mind. Whether or not you want to hear a tree fall, if you run a network, you probably want to hear a server when it goes down. Many organizations utilize the long-established Simple Network Management Protocol (SNMP) as a way to monitor their networks proactively and listen for things going down.

At a rudimentary level, SNMP requires only two items to work: a management server and a managed device (or devices). The management server pulls status and health information at regular intervals from the managed devices and stores the information in a table. Managed devices use local SNMP agents to notify the management server when defined behavior occurs (such as errors or "traps"), which are stored in the same table on the server. The result is an accurate, real-time reporting mechanism for outages. However, SNMP as a protocol does not stipulate how the data in these tables is to be presented and managed for the end user. That's where a promising new open-source network-monitoring software called Zenoss (pronounced Zeen-ohss) comes in.

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment. The Core version of Zenoss used in this article is freely available under the GPLv2. An Enterprise version also is available with additional features and support. In this article, we install Zenoss on a CentOS 5.1 system to observe its usefulness in a network-monitoring role. From there, we create a simulated multisystem server network using the following systems: a Fedora-based Postfix e-mail server, an Ubuntu server running Apache and a Windows server running File and Print services. To conserve space, only the CentOS installation is discussed in detail here. For the managed systems, only SNMP installation and configuration are covered.

Building the Zenoss Server
Begin by selecting your hardware. Zenoss lacks specific hardware requirements, but it relies heavily on MySQL, so you can use MySQL requirements as a rough guideline. I recommend using the fastest processor available, 1GB of memory, fast enough hard disks to provide acceptable MySQL performance and Gigabit Ethernet for the network. I ran several test configurations, and this configuration seemed adequate enough for a medium-size network (100+ nodes/devices). To keep configuration simple, all firewalls and SELinux instances were disabled in the test environment. If you use firewalls in your environment, open ports 161 (SNMP), 8080 (Zenoss Management Page) and 514 (if you integrate syslog with Zenoss).

Install CentOS 5.1 on the server using your own preferences. I used a bare install with no X Window System or desktop manager. Assign a static IP address and any other pertinent network information (DNS servers and so forth). After the OS install is complete, install the following packages using the yum command below:

yum install mysql mysql-server net-snmp net-snmp-utils gmp httpd

If the mysqld or the httpd service has not started after yum installs it, start it and set it to run for your configured runlevel. Next, download the latest Zenoss Core rpm from Sourceforge.net (2.1.3 at the time of this writing), and install it using rpm from the command line. To start all the Zenoss-related dæmons after the rpm has been installed, type the following at a command prompt:

service zenoss start
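The earlier steps, starting the services, making them persistent across reboots and installing the rpm, can be sketched like this on CentOS 5. The Zenoss rpm filename is an example; use whatever version you actually downloaded:

```shell
# Start MySQL and Apache now, and enable them for the configured runlevels
service mysqld start && chkconfig mysqld on
service httpd start && chkconfig httpd on
# Install the downloaded Zenoss Core rpm (example filename)
rpm -ivh zenoss-2.1.3.el5.i386.rpm
```

These commands must be run as root, and chkconfig ensures the services come back after a reboot.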

Launch a Web browser from any machine, and type the IP address of the Zenoss server using port 8080 (for example, http://192.168.142.6:8080). Log in to the site using the default account admin with a password of zenoss. This brings up the main dashboard. The dashboard is a compartmentalized view of the state of your managed devices. If you don't like the default display, you can arrange your dashboard any way you want using the various drop-down lists on the portlets (windows). I recommend setting the Production States portlet to display Production, so we can see our test systems after they are added.

Almost everything related to managed devices in Zenoss revolves around classes. With classes, you can create an infinite number of systems, processes or service classifications to monitor. To begin adding devices, we need to set our SNMP community strings at the top-level Devices class. SNMP community strings are like passphrases used to authenticate traffic between devices. If one device wants to communicate with another, they must have matching community names/strings. In many deployments, administrators use the default community name of public (and/or private), which creates a security risk. I recommend changing these strings and making them into a short phrase. You can add numbers and characters to make the community name more complex to guess/crack, but I find phrases easier to remember.

Click on the Devices link on the navigation menu on theleft so that Devices is listed near the top of the page Clickon the zProperties tab and scroll down Enter an SNMP

Zenoss and the Art ofNetwork MonitoringIf a server goes down do you want to hear it JERAMIAH BOWLING


SYSTEM ADMINISTRATION

community string in the zSNMPCommunity field. For our test environment, I used the string whatsourclearanceclarence. You can use different strings with different subclasses of systems or individual systems, but by setting it at the Devices class, it will be used for any subclasses unless it is overridden. You also could list multiple strings in the zSNMPCommunities under the Devices class, which allows you to define multiple strings for the discovery process discussed later. Make sure your community string (zSNMPCommunity) is in this list.

Installing Net-SNMP on Linux Clients
Now, let's set up our Linux systems so they can talk to the Zenoss server. After installing and configuring the operating systems on our other Linux servers, install the Net-SNMP package on each, using the following command on the Ubuntu server:

sudo apt-get install snmpd

And on the Fedora server, use:

yum install net-snmp

Once the Net-SNMP packages are installed, edit out any other lines in the Access Control sections at the beginning of /etc/snmp/snmpd.conf, and add the following lines:

#          sec.name   source            community
com2sec    local      localhost         whatsourclearanceclarence
com2sec    mynetwork  192.168.142.0/24  whatsourclearanceclarence

#      group.name  sec.model  sec.name
group  MyROGroup   v1         local
group  MyROGroup   v1         mynetwork
group  MyROGroup   v2c        local
group  MyROGroup   v2c        mynetwork

#          incl/excl  subtree  mask
view all   included   .1       80

#       context  sec.model  sec.level  prefix  read  write  notif
access  MyROGroup ""  any   noauth     exact   all   none   none

Do not edit out any lines beneath the last Access Control sections. Please note that the above is only a mildly restrictive configuration. Consult the snmpd.conf file or the Net-SNMP documentation if you want to tighten access. On the Ubuntu server, you also may have to change the following line in the /etc/default/snmpd file to allow SNMP to bind to anything other than the local loopback address:

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'

Installing SNMP on Windows
On the Windows server, access the Add/Remove Programs utility from the Control Panel. Click on the Add/Remove Windows Components button on the left. Scroll down the list of Components, check off Management and Monitoring Tools, and click on the Details button. Check Simple Network Management Protocol in the list, and click OK to install. Close the Add/Remove window, and go into the Services console from Administrative Tools in the Control Panel. Find the SNMP service in the list, right-click on it, and click on Properties to bring up the service properties tabs. Click on the Traps tab, and type in the community name. In the list of Trap Destinations, add the IP address of the Zenoss server. Now, click on the Security tab, and check off the Send authentication trap box, enter the community name, and give it READ-ONLY rights. Click OK, and restart the service.

Return to the Zenoss management Web page. Click the Devices link to go into the subclass of /Devices/Servers/Windows, and on the zProperties tab, enter the name of a domain admin account and password in the zWinUser and zWinPassword fields. This account gives Zenoss access to the Windows Management Instrumentation (WMI) on your Windows systems. Make sure to click Save at the bottom of the page before navigating away.

Adding Devices into Zenoss
Now that our systems have SNMP, we can add them into Zenoss. Devices can be added individually or by scanning the network. Let's do both. To add our Ubuntu server into Zenoss, click on the Add Device link under the Management navigation section. Enter the IP address of the server and the community name. Under Device Class Path, set the selection to /Server/Linux. You could add a variety of other hardware, software and Zenoss information on this page before adding a system, but at a minimum, an IP address, name and community name are required (Figure 1). Click the Add Device button, and the discovery process runs. When the results are displayed, click on the link to the new device to access it.

Figure 1. Adding a Device into Zenoss

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment.


To scan the network for devices, click the Networks link under the Browse By section of the navigation menu. If your network is not in the list, add it using CIDR notation. Once added, check the box next to your network, and use the drop-down arrow to click on the Select Discover Devices option. You will see a similar results page as the one from before. When complete, click on the links at the bottom of the results page to access the new devices. Any device found will be placed in the /Discovered class. Because we should have discovered the Fedora server and the Windows server, they should be moved to the /Devices/Servers/Linux and /Devices/Servers/Windows classes, respectively. This can be done from each server's Status tab by using the main drop-down list and selecting Manage→Change Class.

If all has gone well so far, we have a functional SNMP monitoring system that is able to monitor heartbeat/availability (Figure 2) and performance information (Figure 3) on our systems. You can customize other various Status and Performance Monitors to meet your needs, but here we will use the default localhost monitors.

Figure 2. The Zenoss Dashboard

Figure 3. Performance data is collected almost immediately after discovery.

Creating Users and Setting E-Mail Alerts
At this point, we can use the dashboard to monitor the managed devices, but we will be notified only if we visit the site. It would be much more helpful if we could receive alerts via e-mail. To set up e-mail alerting, we need to create a separate user account, as alerts do not work under the admin account. Click on the Settings link under the Management navigation section. Using the drop-down arrow on the menu, select Add User. Enter a user name and e-mail address when prompted. Click on the new user in the list to edit its properties. Enter a password for the new account, and assign a role of Manager. Click Save at the bottom of the page. Log out of Zenoss, and log back in with the new account. Bring the settings page back up, and enter your SMTP server information. After setting up SMTP, we need to create an Alerting Rule for our new user. Click on the Users tab, and click on the account just created in the list. From the resulting page, click on the Edit tab, and enter the e-mail address to which you want alerts sent. Now, go to the Alerting Rules tab, and create a new rule using the drop-down arrow. On the edit tab of the new Alerting Rule, change the Action to email, Enabled to True, and change the Severity formula to >= Warning (Figure 4). Click Save.
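The severity threshold in a rule is just an ordered comparison. A minimal sketch, assuming Zenoss's standard severity ordering (the function name and list are illustrative, not Zenoss API):

```python
# Zenoss event severities, lowest to highest; a rule with the formula
# ">= Warning" fires for Warning, Error and Critical events.
SEVERITIES = ["Clear", "Debug", "Info", "Warning", "Error", "Critical"]

def should_alert(event_severity, threshold="Warning"):
    """Return True if an event meets or exceeds the rule's threshold."""
    return SEVERITIES.index(event_severity) >= SEVERITIES.index(threshold)
```

With the default threshold, an Info event stays quiet while an Error event triggers mail.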

Figure 4. Creating an Alert Rule

Figure 5. Zenoss alerts are sent fresh to your mailbox.



The above rule sends alerts when any Production server experiences an event rated Warning or higher (Figure 5). Using a filter, you can create any number of rules and have them apply only to specific devices or groups of devices. If you want to limit your alerts by time, to working hours for example, use the Schedule tab on the Alerting Rule to define a window. If no schedule is specified (the default), the rule runs all the time. In our rule, only one user will be notified. You also can create groups of users from the Settings page, so that multiple people are alerted, or you could use a group e-mail address in your user properties.

Services and Processes
We can expand our view of the test systems by adding a process and a service for Zenoss to monitor. When we refer to a process in Zenoss, we mean an active program, usually a daemon, running on a managed device. Zenoss uses regular expressions to monitor processes.
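The matching works the way you would expect: the pattern is compared against entries in the device's process table. A rough illustration in Python (the sample process list here is made up; Zenoss gathers the real one via SNMP):

```python
import re

# The regex "master" (used below for the Postfix process) matched
# against a hypothetical process table from a mail server.
process_table = [
    "/usr/lib/postfix/master",
    "pickup -l -t fifo -u -c",
    "/usr/sbin/sshd",
]

pattern = re.compile("master")
matched = [proc for proc in process_table if pattern.search(proc)]
```

Only the Postfix master daemon matches, which is why a short, distinctive regex is enough to track a process.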

To monitor Postfix on the mail server, first let's define it as a process. Navigate to the Processes page under the Classes section of the navigation menu. Use the drop-down arrow next to OS Processes, and click Add Process. Enter Postfix as the process ID. When you return to the previous page, click on the link to the new process. On the edit tab of the process, enter master in the Regex field. Click Save before navigating away. Go to the zProperties tab of the

process, and make sure the zMonitor field is set to True. Click Save again. Navigate back to the mail server from the dashboard, and on the OS tab, use the topmost menu's drop-down arrow to select Add→Add OSProcess. After the process has been added, we will be alerted if the Postfix process degrades or fails. While still on the OS tab of the

Figure 6. Zenoss comes with a plethora of predefined Windows services to monitor.


server, place a check mark next to the new Postfix process, and from the OS Processes drop-down menu, select Lock OSProcess. On the next set of options, select Lock from deletion. This protects the process from being overwritten if Zenoss remodels the server.

Services in Zenoss are defined by active network ports instead of running daemons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1-65535 and include common network apps/protocols, such as SMTP (Port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).
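At bottom, an IP-service check is an attempt to open a TCP connection to the port in question. A simplified stand-in for what such a monitor does (a sketch, not Zenoss code):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds,
    which is the essence of an IP-service availability check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A monitor loops over checks like this on a schedule and raises an event when one flips from True to False.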

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the main menu's drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class Field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server, and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).

Only the Beginning
There are a multitude of other features in Zenoss that space here prevents covering, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9), and Zenpacks that provide additional monitoring and performance-capturing capabilities for common applications.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind as long as you work with networks, with Zenoss, at least you will be able to sleep at night knowing you will hear things when they go down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

Resources

Zenoss: www.zenoss.com

Zenoss SourceForge Downloads Page: sourceforge.net/project/showfiles.php?group_id=163126

NET-SNMP: net-snmp.sourceforge.net

CentOS: www.centos.org

CentOS 5 Mirrors: isoredirect.centos.org/centos/5/isos/i386

Figure 7. Monitoring HTTP as an IPService
Figure 8. Zenoss automatically maps your network for you.

Figure 9. Multiple sites can be monitored geographically with the Google Maps API.


In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market: Dell and HP Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects, such as the Etherboot Project and the Linux Terminal Server Project, have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys world-wide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?
A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot eXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the

boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP, stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4); for example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a

Thin Clients Booting over a Wireless Bridge
How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON


Figure 1. LTSP Traffic Profile and Boot Sequence

thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.
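A back-of-envelope comparison makes the difference concrete (the file size and session length below are arbitrary illustrative numbers, not measurements from our tests):

```python
# Fat client: a 10MB file pulled from the server, then pushed back on save.
fat_bytes = 2 * 10 * 1024 * 1024

# Thin client: roughly 5kbps of keystroke/screen-update traffic
# sustained over a ten-minute editing session.
thin_bytes = (5_000 / 8) * 10 * 60

assert thin_bytes < fat_bytes  # the thin client moves far less data
```

Even over a whole editing session, the thin client's traffic is a small fraction of a single round trip of the file.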

How to Set Up a Thin-Client Network with a Wireless Bridge
One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1, serving PXE clients.

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap:    192.168.0.0/24
rpc.mountd: 192.168.0.0/24
rpc.statd:  192.168.0.0/24
in.tftpd:   192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart

sudo invoke-rc.d nfs-common restart

sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide in getting the bridge running so that the clients can boot over the bridge:

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet


adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all daemon programs manually.

Evaluating the Performance of a Thin-Client Network
The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to a wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between

the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.
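The decibels-to-distance conversion can be understood through the free-space path-loss model, in which loss grows with the logarithm of distance. A sketch of that idealized model at 2.4GHz (the chamber's own calibration may differ from this textbook formula):

```python
import math

def fspl_db(distance_m, freq_hz=2.4e9):
    """Free-space path loss in dB for an ideal link.
    Inverting this maps an attenuation figure back to an
    approximate separation between the two radios."""
    c = 3e8  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))
```

Each tenfold increase in distance adds 20dB of loss, so dialing attenuation up in the chamber stands in for moving the bridge endpoints apart.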

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber;



Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Conclusion
As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.
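To put rough numbers on that gap (idealized, ignoring protocol overhead; the payload size is an assumed figure for kernel plus root-filesystem reads during boot, not a measured value):

```python
# Assume ~100MB of data is read over the network while one client boots.
payload_megabits = 100 * 8

ethernet_secs = payload_megabits / 100  # 100Mbps wired Ethernet
wifi_g_secs = payload_megabits / 22     # ~22Mbps real-world 802.11g throughput

# The wireless link takes several times longer for the same payload.
assert wifi_g_secs > 4 * ethernet_secs
```

With several clients sharing that wireless capacity at once, the per-client boot time grows quickly, which is why staggered boots help.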

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our tests show, as illustrated in Figure 5, heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.

Acknowledgement
This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03IN31396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now, you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in remote power and remote terminals or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup
You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP
Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), and locate the subnet section (or each host section, if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;

filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus
Set up a PXE server and then add menus to boot kickstart images, rescue disks and diagnostic tools all from the network. KYLE RANKIN



Install TFTP
After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package or otherwise follow the installation instructions from the project's official site.
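The tsize requirement comes from TFTP option negotiation (RFC 2349): the client asks the server for the file's size up front, before the transfer begins. A sketch of the read request a client sends, per RFC 1350 with the tsize option appended (illustrative packet construction, not pxelinux source):

```python
import struct

def tftp_rrq(filename, mode="octet"):
    """Build a TFTP read-request packet: opcode 1, then the filename,
    transfer mode and a tsize option, each NUL-terminated."""
    return (struct.pack("!H", 1)          # opcode 1 = RRQ
            + filename.encode() + b"\x00"
            + mode.encode() + b"\x00"
            + b"tsize\x00" + b"0\x00")    # tsize=0 asks the server to fill it in

pkt = tftp_rrq("pxelinux.0")
```

A server that supports the option answers with an OACK carrying the real file size; one that does not simply starts the transfer, which is why pxelinux needs a tsize-capable server.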

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf, if it wasn't already added for you:

tftp    dgram   udp     wait    root    /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
#       the trivial file transfer protocol.
#       The tftp protocol is often used to boot diskless
#       workstations, download configuration files to network-aware
#       printers, and to start the installation process for
#       some operating systems.
service tftp
{
        disable         = no
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /var/lib/tftpboot
        per_source      = 11
        cps             = 100 2
        flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.

Add Syslinux

Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.
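Those two steps can be sketched as shell commands; the location of pxelinux.0 is an assumption here, as distributions ship it in different places (/usr/lib/syslinux and /usr/share/syslinux are both common):

```shell
# Copy the PXE bootloader into the TFTP root, and create the
# directory pxelinux will search for its configuration files.
cp /usr/lib/syslinux/pxelinux.0 /var/lib/tftpboot/
mkdir -p /var/lib/tftpboot/pxelinux.cfg
```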

PXE Menu Magic

You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

1. Files named 01-MACADDRESS, with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

2. Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

3. Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
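The search order above can be sketched as a small script that prints the file names pxelinux would request for one host; the MAC address and IP here are made up for illustration:

```shell
#!/bin/sh
# Print the pxelinux.cfg/ file names tried for one host, in order.
mac="88:99:aa:bb:cc:dd"
ip="10.0.0.200"

# 1) "01-" (the Ethernet ARP hardware type) plus the lowercase MAC,
#    with hyphens instead of colons.
echo "01-$(echo "$mac" | tr ':' '-' | tr 'A-F' 'a-f')"

# 2) The IP address as eight uppercase hex digits, dropping one digit
#    from the end after each failed lookup.
hex=$(printf '%02X%02X%02X%02X' $(echo "$ip" | tr '.' ' '))
while [ -n "$hex" ]; do
    echo "$hex"
    hex=${hex%?}
done

# 3) The catchall.
echo "default"
```

For 10.0.0.200, the hex names run from 0A0000C8 down to a single digit before falling through to default.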

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux

label linux
    kernel vmlinuz-centos-3.6
    append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?

The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

www.linuxjournal.com | 27

You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools like Knoppix and memtest86+ available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus

Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more-specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

label 2
    kernel vmlinuz-centos-4.5-64
    append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option names a message file to display by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----|  Boot Options  |-----
|                         |
|   1. Knoppix 5.1.1      |
|   2. CentOS 4.5 64 bit  |
|                         |
---------------------------

<F1> Main | <F2> Help
Default image will boot in 30 seconds

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk

One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk, I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the one I used above in my example:
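Sketched as commands, with the CD mount point as an assumption:

```shell
# Copy and rename the Knoppix kernel and initial ramdisk into the TFTP root.
mount /dev/cdrom /mnt/cdrom
cp /mnt/cdrom/boot/isolinux/vmlinuz /var/lib/tftpboot/vmlinuz-knx5.1.1
cp /mnt/cdrom/boot/isolinux/miniroot.gz /var/lib/tftpboot/miniroot-knx5.1.1.gz
```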

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx5.1.1 and miniroot-knx5.1.1.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove 2 from the append line.

The final step might be the largest for you, if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.
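A minimal sketch of that NFS setup, assuming the export path and network from the example (check the exports(5) man page for your distribution's conventions):

```shell
# Copy the CD contents into the share, and export it read-only to the LAN.
mkdir -p /mnt/knoppix/5.1.1
mount /dev/cdrom /mnt/cdrom
cp -a /mnt/cdrom/. /mnt/knoppix/5.1.1/
echo '/mnt/knoppix/5.1.1 10.0.0.0/255.255.255.0(ro)' >> /etc/exports
exportfs -ra
```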

Extra Features: Memtest86+

Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
    kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion

There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.



Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan

IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825–1829). Later, in 1998, these RFCs were deprecated by RFCs 2401–2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

Creating VPNs with IPsec and SSL/TLS

How to create IPsec and SSL/TLS tunnels in Linux.

RAMI ROSEN

Figure 1. A Basic VPN Tunnel

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. S/WAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a setmark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN

OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan

First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
    #
    # Simply use raw RSA keys.
    # After starting openswan, run:
    #     ipsec showhostkey --left (or --right)
    # and fill in the connection similarly
    # to the example below.
    left=192.168.0.89
    leftrsasigkey=0sAQPP...
    # The remote user.
    #
    right=192.168.0.92
    rightrsasigkey=0sAQON...
    type=tunnel
    auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsa.key

Then, copy and paste the contents of rsa.key into /etc/ipsec.secrets.
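On one participant, that key setup might look like the following sketch; appending the file stands in for the copy-and-paste step, and depending on your Openswan version, the pasted key may need to sit inside a ": RSA { ... }" section (see man 5 ipsec.secrets):

```shell
# Generate a host RSA key and make it available to the IPsec daemon.
ipsec rsasigkey --verbose 2048 > rsa.key
cat rsa.key >> /etc/ipsec.secrets

# Print the public part in the form needed for leftrsasigkey=/rightrsasigkey=.
ipsec showhostkey --left
```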

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of:

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. And, output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                                [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                           [OK]
NETKEY detected, testing for disabled ICMP send_redirects      [OK]
NETKEY detected, testing for disabled ICMP accept_redirects    [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets)    [OK]
Checking that pluto is running                                 [OK]
Checking for 'ip' command                                      [OK]
Checking for 'iptables' command                                [OK]
Opportunistic Encryption Support                               [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets; this is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.


Example: Setting Up a VPN Tunnel with OpenVPN

First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side over a secured channel (like SSH, for example). Both client and server use this shared key when the tunnel is created.
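For example, generating the key on the server and pushing it to the client over SSH might look like this sketch; the client host name and destination path are assumptions:

```shell
# Generate the static key, then copy it to the peer over SSH.
openvpn --genkey --secret static.key
scp static.key root@client.example.com:/etc/openvpn/static.key
```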

This type of shared key is the simplest; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the mtu of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.
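A minimal sketch of such firewall rules, assuming a default iptables INPUT chain (adapt them to your own rule set):

```shell
# Accept OpenVPN's UDP port and anything arriving on TUN interfaces.
iptables -A INPUT -p udp --dport 1194 -j ACCEPT
iptables -A INPUT -i tun+ -j ACCEPT
```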

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar  3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official
port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as
the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).

IPsec and OpenVPN: a Short Comparison

IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement

Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or "stored routines", to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era

Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

- Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

- Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

- Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

- Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security

Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.
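As an illustration of definer rights, here is a hypothetical sketch; the table, column and account names are invented for this example, not taken from the article. The procedure runs with its definer's permissions, enforces a validity rule, logs every insert, and the application account receives EXECUTE on it but no direct table privileges:

```sql
DELIMITER //

-- Runs with the definer's rights (SQL SECURITY DEFINER is the default).
CREATE PROCEDURE add_account(IN p_name VARCHAR(64), IN p_balance DECIMAL(10,2))
    SQL SECURITY DEFINER
BEGIN
    -- A business rule the caller cannot bypass: no negative opening balance.
    IF p_balance >= 0 THEN
        INSERT INTO accounts (name, balance) VALUES (p_name, p_balance);
        -- Every insert is logged, whoever calls the procedure.
        INSERT INTO account_audit (name, action, at)
        VALUES (p_name, 'INSERT', NOW());
    END IF;
END //

DELIMITER ;

-- The application account may call the procedure but cannot touch the tables.
GRANT EXECUTE ON PROCEDURE mydb.add_account TO 'app_user'@'%';
```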

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

SYSTEM ADMINISTRATION

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).
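To make the contrast concrete, here is a minimal sketch (table and procedure names are invented): the user-supplied value arrives as a bound parameter rather than being concatenated into the statement text, so a crafted input cannot change the statement's structure:

```sql
-- A lookup routine whose input is a bound parameter.
-- An input such as:  anything' OR '1'='1
-- is treated as a literal string, not as SQL.
CREATE PROCEDURE find_customer(IN p_name VARCHAR(64))
BEGIN
  SELECT customer_id, name
    FROM customers
   WHERE name = p_name;
END;

-- The application then issues:
--   CALL find_customer('O''Brien');
```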

Data Abstraction
It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
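A sketch of such a procedure (the schema and names are invented; a real implementation would also handle withdrawal limits and logging) shows the whole exchange collapsing into a single CALL from the application:

```sql
CREATE PROCEDURE transfer_funds(IN p_from INT, IN p_to INT,
                                IN p_amount DECIMAL(10,2),
                                OUT p_status VARCHAR(20))
BEGIN
  DECLARE v_balance DECIMAL(10,2);

  START TRANSACTION;

  -- Lock and read the source account balance.
  SELECT balance INTO v_balance
    FROM accounts WHERE account_id = p_from FOR UPDATE;

  IF v_balance >= p_amount THEN
    UPDATE accounts SET balance = balance - p_amount
     WHERE account_id = p_from;
    UPDATE accounts SET balance = balance + p_amount
     WHERE account_id = p_to;
    COMMIT;
    SET p_status = 'OK';
  ELSE
    ROLLBACK;
    SET p_status = 'INSUFFICIENT FUNDS';
  END IF;
END;
```

The application issues one CALL and reads p_status, instead of performing each SELECT, UPDATE and COMMIT across the network itself.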

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications
Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

www.linuxjournal.com | 35

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed
It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.
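A sketch of the kind of routine that benefits (schema and names invented): the rows are walked with a cursor inside the server, applying per-row logic that plain SQL cannot express easily, and only the final aggregate crosses the network:

```sql
-- Row-by-row aggregation performed entirely server-side;
-- a single OUT value is returned to the client.
CREATE PROCEDURE avg_completion_hours(OUT p_result DECIMAL(12,2))
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE v_created, v_completed DATETIME;
  DECLARE v_total DECIMAL(12,2) DEFAULT 0;
  DECLARE v_count INT DEFAULT 0;
  DECLARE cur CURSOR FOR
    SELECT created_at, completed_at
      FROM orders
     WHERE completed_at IS NOT NULL;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO v_created, v_completed;
    IF done THEN
      LEAVE read_loop;
    END IF;
    -- Arbitrary per-row business logic would go here.
    SET v_total = v_total + TIMESTAMPDIFF(HOUR, v_created, v_completed);
    SET v_count = v_count + 1;
  END LOOP;
  CLOSE cur;

  SET p_result = v_total / v_count;
END;
```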

Logic Fragmentation
Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping
It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?
Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements, such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes and will take effect for all applications that access the database, including ad hoc SQL.
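For instance (schema invented for illustration), a trigger that maintains a denormalized balance column might look like this; it fires for every client that inserts a row, including ad hoc SQL sessions:

```sql
-- Keep a running balance denormalized into accounts whenever
-- a row is added to transactions.
CREATE TRIGGER maintain_balance
AFTER INSERT ON transactions
FOR EACH ROW
  UPDATE accounts
     SET balance = balance + NEW.amount
   WHERE account_id = NEW.account_id;
```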

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.
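A sketch of such a maintenance task (the table name is invented, and the scheduler must be enabled, for example with SET GLOBAL event_scheduler = ON):

```sql
-- Purge audit rows older than 90 days, once a day.
CREATE EVENT purge_old_audit_rows
ON SCHEDULE EVERY 1 DAY
DO
  DELETE FROM audit_log
   WHERE logged_at < NOW() - INTERVAL 90 DAY;
```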

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
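As a minimal sketch (names and the interest formula are invented), a stored function centralizes a calculation and then drops straight into ordinary SQL:

```sql
-- Encapsulate an interest calculation so it can be reused
-- directly inside SQL statements.
CREATE FUNCTION monthly_interest(balance DECIMAL(10,2),
                                 annual_rate DECIMAL(6,4))
RETURNS DECIMAL(10,2)
DETERMINISTIC
  RETURN ROUND(balance * annual_rate / 12, 2);

-- Usage within standard SQL:
--   SELECT account_id, monthly_interest(balance, 0.0450)
--     FROM accounts;
```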

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006, with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.


In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare, on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.

The ha.cf File
The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around! Two different example ha.cf files are shown in Listings 2 and 3.

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

Listing 1. The /etc/ha.d/authkeys File

auth 1
1 sha1 ThisIsAVeryLongAndBoringPassword

The first thing you need to specify is the keepalive: the time between heartbeats, in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary, standby state


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1
# On stratton, the serial device is /dev/ttyS0
serial /dev/ttyS1

auto_failback on

node briggs
node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos' heartbeat ip address is 192.168.1.11
# phobos' heartbeat ip address is 192.168.1.21
ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey
stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos
node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File
The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right, the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).


Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second. Not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting
Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion
I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat, Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation
To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of disk space. I recommend at least a 1GHz processor or above with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux. HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

Now, with the useradd command, create an account named fedorauser, under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port) using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd
/opt/fedora-ds/start-admin &

Directory Structure and Management
Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication
Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora plus FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already to make life easier.
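If you don't have DNS for the two servers, a pair of /etc/hosts entries on each machine is enough. The addresses and hostnames below are hypothetical placeholders, not values from this setup:

```
192.168.1.10    fds1.example.com    fds1
192.168.1.11    fds2.example.com    fds2
```

Each server needs an entry for its peer (keeping both on both machines is simplest), and the names must match what you enter later in the replication agreement.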

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config
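The console creates this account for you through the steps above; for reference, the resulting entry expressed as LDIF looks roughly like the following. This is a sketch: the objectClass list and password value are illustrative, not taken from the article:

```ldif
dn: uid=RManager,cn=config
objectClass: top
objectClass: person
objectClass: inetorgperson
uid: RManager
cn: Replication Manager
sn: Manager
userPassword: secret
```

Whatever the exact schema, the DN you supply as the supplier must match this entry exactly on both servers.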

Click Add and then the Save button at the bottom. On server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others.

Figure 5. User Screen

Figure 6. Setting Up Replica

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients
With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM, the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
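If you would rather script this client-side change than click through the graphical utility, Fedora's authconfig tool makes the same edits from the command line. The sketch below only echoes the command for review (it must run as root on a real client), and the server URI and base DN are placeholders for your environment:

```shell
#!/bin/sh
set -e
# Placeholders -- substitute your FDS server and directory suffix.
LDAP_URI="ldap://fds1.example.com:3891"
BASE_DN="dc=example,dc=com"

# authconfig rewrites /etc/nsswitch.conf and /etc/pam.d/system-auth,
# just as the graphical Authentication utility does:
echo authconfig --enableldap --enableldapauth \
     --ldapserver="$LDAP_URI" --ldapbasedn="$BASE_DN" --update
```

Note the non-standard port 3891 in the URI, matching the ports chosen during the FDS setup.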


Figure 7. Enter a name and password.

Figure 8. Enter information for the RManager account.

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.
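What the client pulls at login are the entry's posixAccount attributes. A user entry carrying the minimum Posix fields might look like this in LDIF; the DN, names and numbers are invented for illustration:

```ldif
dn: uid=jdoe,ou=Accounting,dc=example,dc=com
objectClass: top
objectClass: inetOrgPerson
objectClass: posixAccount
cn: John Doe
sn: Doe
uid: jdoe
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/jdoe
loginShell: /bin/bash
```

An account missing uidNumber, gidNumber, homeDirectory or loginShell will fail to authenticate as a desktop login even if its password is correct.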

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some dll issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions
FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and makes it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.




Figure 10. Plugin Tester Tool

Figure 11. Login Prompt

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


the background, so you can initiate other commands from the same shell session. Without this, the cursor won't reappear in your shell window until you kill the command you just started.

At this point, the GNOME System Monitor window should appear on mylaptop's screen, displaying system performance information for remotebox. And that, really, is all there is to it.

This technique works for practically any X Window System application installed on the remote system. The only catch is that you need to know the name of anything you want to run in this way: that is, the actual name of the executable file.

If you're accustomed to starting your X Window System applications from a menu on your desktop, you may not know the names of their corresponding executables. One quick way to find out is to open your desktop manager's menu editor, and then view the properties screen for the application in question.

For example, on a GNOME desktop, you would right-click on the Applications menu button, select Edit Menus, scroll down to System/Administration, right-click on System Monitor and select Properties. This pops up a window whose Command field shows the name gnome-system-monitor.

Figure 1 shows the Launcher Properties, not for the GNOME System Monitor, but instead for the GNOME File Browser, which is a better example, because its launcher entry includes some command-line options. Obviously, all such options also can be used when starting X applications over SSH.

Figure 1. Launcher Properties for the GNOME File Browser (Nautilus)

If this sounds like too much trouble, or if you're worried about accidentally messing up your desktop menus, you simply can run the application in question, issue the command ps auxw in a terminal window, and find the entry that corresponds to your application. The last field in each row of the output from ps is the executable's name, plus the command-line options with which it was invoked.
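For example, given a captured line of ps auxw output (the sample line below is invented), the eleventh whitespace-separated field is where the command begins, so awk can pull out the executable's name:

```shell
#!/bin/sh
set -e
# An invented line in `ps auxw` format (fields: USER PID %CPU %MEM
# VSZ RSS TTY STAT START TIME COMMAND...):
sample='mick  4242  0.3  1.2 123456 7890 ?  Sl  10:01  0:02 nautilus --no-desktop'

# Field 11 is the executable; anything after it is its options.
cmd=$(echo "$sample" | awk '{print $11}')
echo "$cmd"   # prints: nautilus
```

Running the same awk one-liner against live ps output works just as well, once you grep for the application you started.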

Once you've finished running your remote X Window System application, you can close it the usual way (selecting Exit from its File menu, clicking the x button in the upper right-hand corner of its window and so forth). Don't forget to close your SSH session too, by issuing the command exit in the terminal window where you're running SSH.

Virtual Network Computing (VNC)
Now that I've shown you the preferred way to run remote X Window System applications, let's discuss how to control an entire remote desktop. In the Linux/UNIX world, the most popular tool for this is Virtual Network Computing, or VNC.

Originally a project of the Olivetti Research Laboratory (which was subsequently acquired by Oracle and then AT&T before being shut down), VNC uses a protocol called Remote Frame Buffer (RFB). The original creators of VNC now maintain the application suite RealVNC, which is available in free and commercial versions, but TightVNC, UltraVNC and GNOME's vino VNC server and vinagre VNC client also are popular.

VNC's strengths are its simplicity, ubiquity and portability: it runs on many different operating systems. Because it runs over a single TCP port (usually TCP 5900), it's also firewall-friendly and easy to tunnel.

Its security, however, is somewhat problematic. VNC authentication uses a DES-based transaction that, if eavesdropped on, is vulnerable to optimized brute-force (password-guessing) attacks. This vulnerability is exacerbated by the fact that many versions of VNC support only eight-character passwords.

Furthermore, VNC session data usually is transmitted unencrypted. Only a couple flavors of VNC support TLS encryption of RFB streams, and it isn't clear how securely TLS has been implemented even in those. Thus, an attacker using a trivially hacked VNC client may be able to reconstruct and view eavesdropped VNC streams.


But Don't Real Sysadmins Stick to Terminal Sessions?
If you've read my book or my past columns, you've endured my repeated exhortations to keep the X Window System off of Internet-facing servers, or any other system on which it isn't needed, due to X's complexity and history of security vulnerabilities. So why am I devoting an entire column to graphical remote system administration?

Don't worry, I haven't gone soft-hearted (though possibly slightly soft-headed). I stand by that advice. But there are plenty of contexts in which you may need to administer or view things remotely besides hardened servers in Internet-facing DMZ networks.

And not all people who need to run remote applications in those non-Internet-DMZ scenarios are advanced Linux users. Should they be forbidden from doing what they need to do until they've mastered using the vi editor and writing bash scripts? Especially given that it is possible to mitigate some of the risks associated with the X Window System and VNC?

Of course they shouldn't, although I do encourage all Linux newcomers to embrace the command line. The day may come when Linux is a truly graphically oriented operating system, like Mac OS, but for now, pretty much the entire OS is driven by configuration files in /etc (and in users' home directories), and that's unlikely to change soon.

Luckily, as it operates over a single TCP port, VNC is easy to tunnel through SSH, through Virtual Private Network (VPN) sessions and through TLS/SSL wrappers, such as stunnel. Let's look at a simple VNC-over-SSH example.

Tunneling VNC over SSH
To tunnel VNC over SSH, your remote system must be running an SSH dæmon (sshd) and a VNC server application. Your local system must have an SSH client (ssh) and a VNC client application.

Our example remote system, remotebox, already is running sshd. Suppose it also has vino, which is also known as the GNOME Remote Desktop Preferences applet (on Ubuntu systems, it's located in the System menu's Preferences section). First, presumably from remotebox's local console, you need to open that applet and enable remote desktop sharing. Figure 2 shows vino's General settings.

If you want to view only this system's remote desktop, without controlling it, uncheck Allow other users to control your desktop. If you don't want to have to confirm remote connections explicitly (for example, because you want to connect to this system when it's unattended), you can uncheck the Ask you for confirmation box. Any time you leave yourself logged in to an unattended system, be sure to use a password-protected screensaver.

vino is limited in this way: because vino is loaded only after you log in, you can use it only to connect to a fully logged-in GNOME session in progress, not, for example, to a gdm (GNOME login) prompt. Unlike vino, other versions of VNC can be invoked as needed from xinetd or inetd. That technique is out of the scope of this article, but see Resources for a link to a how-to for doing so in Ubuntu, or simply Google the string "vnc xinetd".

Continuing with our vino example, don't close that Remote Desktop Preferences applet yet. Be sure to check the Require the user to enter this password box, and to select a difficult-to-guess password. Remember, vino runs in an already-logged-in desktop session, so unless you set a password here, you'll run the risk of allowing completely unauthenticated sessions (depending on whether a password-protected screensaver is running).

If your remote system will be run logged in but unattended, you probably will want to uncheck Ask you for confirmation. Again, don't forget that locked screensaver.

We're not done setting up vino on remotebox yet. Figure 3 shows the Advanced Settings tab.

Several advanced settings are particularly noteworthy. First, you should check Only allow local connections, after which remote connections still will be possible, but only via a port-forwarder like SSH or stunnel. Second, you may want to set a custom TCP port for vino to listen on, via the Use an alternative port option. In this example, I've chosen 20226. This is an instance of useful security-through-obscurity: if our other (more meaningful) security controls fail, at least we won't be running VNC on the obvious default port.

Figure 2. General Settings in GNOME Remote Desktop Preferences (vino)

Figure 3. Advanced Settings in GNOME Remote Desktop Preferences (vino)

Also, you should check the box next to Lock screen on disconnect, but you probably should not check Require encryption, as SSH will provide our session encryption, and adding redundant (and not-necessarily-known-good) encryption will only increase vino's already-significant latency. If you think there's only a moderate level of risk of eavesdropping in the environment in which you want to use vino (for example, on a small, private, local-area network inaccessible from the Internet) vino's TLS implementation may be good enough for you. In that case, you may opt to check the Require encryption box and skip the SSH tunneling.

(If any of you know more about TLS in vino than I was able to divine from the Internet, please write in. During my research for this article, I found no documentation or on-line discussions of vino's TLS design details whatsoever, beyond people commenting that the soundness of TLS in vino is unknown.)

This month, I seem to be offering you more "opt-out" opportunities in my examples than usual. Perhaps I'm becoming less dogmatic with age. Regardless, let's assume you've followed my advice to forego vino's encryption in favor of SSH.

At this point, you're done with the server-side settings. You won't have to change those again. If you restart your GNOME session on remotebox, vino will restart as well, with the options you just set. The following steps, however, are necessary each time you want to initiate a VNC/SSH session.

On mylaptop (the system from which you want to connect to remotebox), open a terminal window, and type this command:

mick@mylaptop:~$ ssh -L 20226:remotebox:20226 admin-slave@remotebox

OpenSSH's -L option sets up a local port-forwarder. In this example, connections to mylaptop's TCP port 20226 will be forwarded to the same port on remotebox. The syntax for this option is "-L [local port]:[remote IP or hostname]:[remote port]". You can use any available local TCP port you like (higher than 1024, unless you're running SSH as root), but the remote port must correspond to the alternative port you set vino to listen on (20226 in our example), or if you didn't set an alternative port, it should be VNC's default of 5900.
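If you make this connection often, the same forwarding can be recorded in ~/.ssh/config under a host alias, so that a plain `ssh remotebox-vnc` brings up the tunnel. The alias name below is made up; note that LocalForward here points at remotebox's own loopback interface once the SSH connection lands there, which is equivalent for our purposes:

```
Host remotebox-vnc
    HostName remotebox
    User admin-slave
    LocalForward 20226 localhost:20226
```

The LocalForward destination is resolved on the remote side of the connection, which is why localhost works here.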

That's it for the SSH session. You'll be prompted for a password and then given a bash prompt on remotebox. But you won't need it, except to enter exit when your VNC session is finished. It's time to fire up your VNC client.

vino's companion client, vinagre (also known as the GNOME Remote Desktop Viewer), is good enough for our purposes here. On Ubuntu systems, it's located in the Applications menu, in the Internet section. In Figure 4, I've opened the Remote Desktop Viewer and clicked the Connect button. As you can see, rather than remotebox, I've entered localhost as the hostname. I've also entered port number 20226.

When I click the Connect button, vinagre connects to mylaptop's local TCP port 20226, which actually is being listened on by my local SSH process. SSH forwards this connection attempt through its encrypted connection to TCP 20226 on remotebox, which is being listened on by remotebox's vino process.

At this point, I'm prompted for remotebox's vino password (shown in Figure 2). On successful authentication, I'll have full access to my active desktop session on remotebox.

To end the session, I close the Remote Desktop Viewer's session, and then enter exit in my SSH session to remotebox. That's all there is to it.

This technique is easy to adapt to other versions of VNC servers and clients, and probably also to other versions of SSH; there are commercial alternatives to OpenSSH, including Windows versions. I mentioned that VNC client applications are available for a wide variety of platforms; on any such platform, you can use this technique, so long as you also have an SSH client that supports port forwarding.

Conclusion
Thus ends my crash course on how to control graphical applications over networks. I hope my examples of both techniques, SSH with X forwarding and VNC over SSH, help you get started with whatever particular distribution and software packages you prefer. Be safe!

Mick Bauer (darthelmo@wiremonkeys.org) is Network Security Architect for one of the US's largest banks. He is the author of the O'Reilly book Linux Server Security, 2nd edition (formerly called Building Secure Servers With Linux), an occasional presenter at information security conferences and composer of the "Network Engineering Polka".


Resources

The Cygwin/X (information about Cygwin's free X server for Microsoft Windows): x.cygwin.com

Tichondrius' HOWTO for setting up VNC with resumable sessions (Ubuntu-centric, but mostly applicable to other distributions): ubuntuforums.org/showthread.php?t=122402

Wikipedia's VNC article, which may be helpful in making sense of the different flavors of VNC: en.wikipedia.org/wiki/Vnc

Figure 4. Using vinagre to Connect to an SSH-Forwarded Local Port


Does anyone remember Linuxcare? Founded in 1998, Linuxcare was a company that provided support services for Linux users in corporate environments. I remember seeing Linuxcare at the first-ever LinuxWorld conference in San Jose, and the thing I took away from that LinuxWorld was the Linuxcare Bootable Business Card (BBC). The BBC was a 50MB cut-down Linux distribution that fit on a business-card-size compact disc. I used that distribution to recover and repair quite a few machines, until the advent of Knoppix. I always loved the portability of that little CD though, and I missed it greatly until I stumbled across Damn Small Linux (DSL) one day.

After reading through the DSL Web site, I discovered that it was possible to run DSL off of a bootable USB key, and that old love for the Bootable Business Card was rekindled in a new way. It wasn't until I had a conversation with fellow sysadmin Kyle Rankin about the PXE boot environment he'd implemented that I realized it might be possible to set up a USB key to do more than merely boot a recovery environment. Before long, I had added the CentOS and Ubuntu netinstalls to my little USB key. Not long after that, I was mentioning this in my favorite IRC channel, and one of the fellows in there suggested I put the code on SourceForge and call it Billix. I'd had a couple beers by then and thought it sounded like a great idea. In that instant, Billix was born.

Billix is an aggregation of many different tools that can be useful to system administrators, all compressed down to fit within a 256MB bootable USB thumbdrive. The 256MB size is not an arbitrary number; rather, it was chosen because USB thumbdrives are very inexpensive at that size (many companies now give them away as advertising gimmicks). This allows me to have many Billix keys lying around, just waiting to be used. Because the keys are cheap or free, I don't feel bad about leaving one in a server for a day or two. If your USB drive is larger than 256MB, you still can use it for its designed purpose: storing files. Billix doesn't hamper normal use of the USB drive in any way. There also is an ISO distribution of Billix if you want to burn a CD of it, but I feel it's not nearly as convenient as having it on a USB key.

The current Billix distribution (0.21 at the time of this writing) includes the following tools:

Damn Small Linux 4.2.5
Ubuntu 8.04 LTS netinstall
Ubuntu 7.10 netinstall
Ubuntu 6.06 LTS netinstall
Fedora 8 netinstall
CentOS 5.1 netinstall
CentOS 4.6 netinstall
Debian Etch netinstall
Debian Sarge netinstall
Memtest86 memory-checking utility
Ntpwd Windows password-changing utility
DBAN disk-wiper utility

So, with one USB key, a system administrator can recover or repair a machine, install one of eight different Linux distributions, test the memory in a system, get into a Windows machine with a lost password, or wipe the disks of a machine before repurpose or disposal. In order to install any of the netinstall-based Linux distributions, a working Internet connection with DHCP is required, as the netinstall downloads the installation bits for each distribution on the fly from Internet-based mirrors.

Hopefully, you're excited to check out Billix. You simply can download the ISO version and burn it to a CD to get started, but the full utility of Billix really shines when you install it on a USB disk. Before you install it on a USB disk, you need to meet the following prerequisites:

Billix: a Sysadmin's Swiss Army Knife
Turn that spare USB stick into a sysadmin's dream with Billix. BILL CHILDERS



256MB or greater USB drive, with FAT- or FAT32-based filesystem

Internet connection with DHCP (for netinstalls only; not required for DSL, Windows password removal or disk wiping with DBAN)

install-mbr (part of the mbr package on Ubuntu or Debian, needed for some USB drives)

syslinux (from the syslinux package on Ubuntu or Debian, required to create the bootsector on the USB drive)

Your system must be capable of booting from USB devices (most have this ability if they're made after 2005)

To install the USB-based version of Billix, first check your drive. If that drive has the U3 Windows software on it, you may want to remove it to unlock all of the drive's capacity (see the Resources section for U3 removal utilities, which are typically Windows-based). Next, if your USB drive has data on it, back up the data. I cannot stress this enough. You will be making adjustments to the partition table of the USB drive, so backing up any data that already is on the key is critical. Download the latest version of Billix from the SourceForge.net project page to your computer. Once the download is complete, untar the contents of the tarball to the root directory of your USB drive.

Now that the contents of the tarball are on your USB drive, you need to install a Master Boot Record (MBR) on the drive and set a bootsector on the drive. The Master Boot Record needs to be set up on the USB drive first. Issue a install-mbr -p1 <device> (where <device> is your USB drive, such as /dev/sdb). Warning: make sure that you get the device of the USB drive correct, or you run the risk of messing up the MBR on your system's boot device. The -p1 option tells install-mbr to set the first partition as active (that's the one that will contain the bootsector).

Next, the bootsector needs to be installed within the first partition. Run syslinux -s <device/partition> (where <device/partition> is the device and partition of the USB drive, such as /dev/sdb1). Warning: much like installing the MBR, installing the bootsector can be a dangerous operation if you run it on the wrong device, so take care and double-check your command line before pressing the Enter key.
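Put together, the whole preparation can be sketched as the following session. The device name /dev/sdb, the mount point /media/usb and the tarball filename are placeholders; double-check yours with fdisk -l before running anything:

```shell
# WARNING: the wrong device name here can destroy your boot disk.
tar -xf billix-current.tar -C /media/usb   # unpack Billix to the drive's root (filename illustrative)
umount /media/usb                          # unmount before writing boot records
install-mbr -p1 /dev/sdb                   # install the MBR, mark partition 1 active
syslinux -s /dev/sdb1                      # write the bootsector to the first partition
```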

Figure 1 The Billix Boot Menu

At this point, your USB drive can be unmounted safely, and you can test it out by booting from the USB drive. Once your system successfully boots from the USB drive, you should see a menu similar to the one shown in Figure 1. Simply choose the number for what you want to boot, run or install, and that distribution will spring into action. If you don't select a number, Damn Small Linux will boot automatically after 30 seconds.

Damn Small Linux is a miniature version of Knoppix (it actually has much of the automatic hardware-detection routines of Knoppix in it). As such, it makes an excellent rescue environment, or it can be used as a quick "trusted desktop" in the event you need to "borrow" a friend's computer to do something. I have used DSL in the past to commandeer a system temporarily at a cybercafé, so I could log in to work and fix a sick server. I've even used DSL to boot and mount a corrupted Windows filesystem, and I was able to save some of the data. DSL is fairly full-featured for its size, and it comes with two window managers (JWM or Fluxbox). It can be configured to save its data back to the USB disk in a persistent fashion, so you always can be sure you have your critical files with you and that it's easily accessible.

All the Linux distribution installations have one thing in common: they are all network-based installs. Although this is a good thing for Billix, as they take up very little space (around 10MB for each distro), it can be a bad thing during installation, as the installation time will vary with the speed of your Internet connection. There is one other upside to a network-based installation. In many cases, there is no need to update

www.linuxjournal.com | 15

Troubleshooting Billix

A few things can go wrong when converting a USB key to run Billix (or any USB-based distribution). The most common issue is for the USB drive to fail to boot the system. This can be due to several things. Older systems often split USB disk support into USB-Floppy emulation and USB-HDD emulation. For Billix to work on these systems, USB-HDD needs to be enabled. If your drive came with the U3 Windows-based software vault, this typically needs to be disabled or removed prior to installing Billix.

If you're seeing "MBR123" or something similar in the upper-left corner, but the system is hanging, you have a misconfigured MBR. Try install-mbr again, and make sure to use the -p1 switch. You will need to run syslinux again after running install-mbr. If all else fails, you probably need to wipe the USB drive and begin again. Back up the data on the USB drive, then use fdisk to build a new partition table (make sure to set it as FAT or FAT32). Use mkfs.vfat (with the -F 32 switch if it's a FAT32 filesystem) to build a new blank filesystem, untar the tarball again, and run install-mbr and syslinux on the newly defined filesystem.
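As a command-level sketch of that rebuild (with /dev/sdb, /media/usb and the tarball name again standing in for your own):

```shell
fdisk /dev/sdb              # recreate one partition, type FAT32, and mark it bootable
mkfs.vfat -F 32 /dev/sdb1   # lay down a fresh, blank FAT32 filesystem
mount /dev/sdb1 /media/usb
tar -xf billix-current.tar -C /media/usb
umount /media/usb
install-mbr -p1 /dev/sdb
syslinux -s /dev/sdb1
```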

the newly installed operating system after installation, because the OS bits that are downloaded are typically up to date. Note that when using the Red Hat-based installers (CentOS 4.6, CentOS 5.1 and Fedora 8), the system may appear to hang during the download of a file called minstg2.img. The system probably isn't hanging; it's just downloading that file, which is fairly large (around 40MB), so it can take a while depending on the speed of the mirror and the speed of your connection. Take care not to specify the USB disk accidentally as the install target for the distribution you are attempting to install.

The memtest86 utility has been around for quite a few years, yet it's a key tool for a sysadmin when faced with a flaky computer. It does only one thing, but it does it very well: it tests the RAM of a system very thoroughly. Simply boot off the USB drive, select memtest from the menu, press Enter, and memtest86 will load and begin testing the RAM of the system immediately. At this point, you can remove the USB drive from the computer. It's no longer needed, as memtest86 is very small and loads completely into memory on startup.

The ntpwd Windows password "cracking" tool can be a controversial tool, but it is included in the Billix distribution because, as a system administrator, I've been asked countless times to get into Windows systems (or accounts on Windows systems) where the password has been lost or forgotten. The ntpwd utility can be a bit daunting, as the UI is text-based and nearly nonexistent, but it does a good job of mounting FAT32- or NTFS-based partitions, editing the SAM account database and saving those changes. Be sure to read all the messages that ntpwd displays, and take care to select the proper disk partition to edit. Also, take the program's advice and nullify a password rather than trying to change it from within the interface; zeroing the password works much more reliably.

DBAN (otherwise known as Darik's Boot and Nuke) is a very good "nuke it from orbit" hard disk wiper. It provides various levels of wipe, from a basic "overwrite the disk with zeros" to a full DoD-certified, multipass wipe. Like memtest86, DBAN is small and loads completely into memory, so you can boot the utility, remove the USB drive, start a wipe and move on to another system. I've used this to wipe clean disks on systems before handing them over to a recycler or before selling a system.

In closing, Billix may not make you coffee in the morning or eradicate Windows from the face of the earth, but having a USB key in your pocket that offers you the functionality to do all of those tasks quickly and easily can make the life of a system administrator (or any Linux-oriented person) much easier.

Bill Childers is an IT Manager in Silicon Valley, where he lives with his wife and two children. He enjoys Linux far too much, and probably should get more sun from time to time. In his spare time, he does work with the Gilroy Garlic Festival, but he does not smell like garlic.



Billix is an aggregation of many different tools that can be useful to system administrators, all compressed down to fit within a 256MB bootable USB thumbdrive.

Expanding Billix
It's relatively easy to expand Billix to support other Linux distributions, such as Knoppix or the Ubuntu live CDs. Copy the contents of the Billix USB tarball to a directory on your hard disk, and download the distro you want. Copy the necessary kernel and initrd to the directory where you put the contents of the USB tarball, taking care to rename any files if there are files in that directory with the same name. Copy any compressed filesystems that your new distro may use to the USB drive (for example, Knoppix has the KNOPPIX directory, and Puppy Linux uses PUP_XXX.SFS). Then, look at the boot configuration for that distro (it should be in isolinux.cfg). Take the necessary lines out of that file, and put them in the Billix syslinux.cfg file, changing filenames as necessary. Optionally, you can add a menu item to the boot.msg file. Finally, run syslinux -s <device>, and reboot your system to test out your newly expanded Billix.
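For example, merging a stanza into syslinux.cfg might look like the following. The labels, kernel and initrd filenames, and the "existing" DSL stanza shown here are illustrative stand-ins, not Billix's actual contents:

```shell
# Append a hypothetical Knoppix stanza to a syslinux.cfg;
# all filenames and labels below are illustrative only.
cfg=/tmp/billix-syslinux.cfg

# stand-in for the existing Billix config (a DSL stanza)
cat > "$cfg" <<'EOF'
default dsl
label dsl
  kernel linux24
  append initrd=minirt24.gz
EOF

# lines lifted from the new distro's isolinux.cfg, renamed to avoid clashes
cat >> "$cfg" <<'EOF'
label knoppix
  kernel vmlinuz-knoppix
  append initrd=miniroot-knoppix.gz ramdisk_size=100000
EOF

grep -c '^label' "$cfg"    # one "label" line per bootable menu entry
```

Each label becomes a choice at the Billix boot prompt, which is why clashing labels or filenames between distros must be renamed before the merge.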

I have a 2GB USB drive that has a "Super-Billix" installation that includes Knoppix and Ubuntu 8.04. An added bonus of having the entire Ubuntu live CD in your pocket is that, thanks to the speed of USB 2.0, you can install Ubuntu in less than ten minutes, which would be really useful at an installfest. There is good information on creating Ubuntu-bootable USB drives available at the Pendrive Linux Web site.

Alternatively, a really neat thing to do (but way beyond the scope of this article) is to convert Billix into a network-boot (via Pre-Execution Environment, or PXE) environment. I've actually got a VMware virtual machine running Billix as a PXE boot server.

Resources

Billix Project Page: sourceforge.net/projects/billix

Damn Small Linux: www.damnsmalllinux.org

DBAN Project Page: dban.sourceforge.net

Knoppix: www.knoppix.net

Pendrive Linux: www.pendrivelinux.com

Syslinux: syslinux.zytor.com/index.php

Pxelinux: syslinux.zytor.com/pxe.php

U3 Removal Software: www.u3.com/uninstall


If a tree falls in the woods and no one is there to hear it, does it make a sound? This is the classic query designed to place your mind into the Zen-like state known as the silent mind. Whether or not you want to hear a tree fall, if you run a network, you probably want to hear a server when it goes down. Many organizations utilize the long-established Simple Network Management Protocol (SNMP) as a way to monitor their networks proactively and listen for things going down.

At a rudimentary level, SNMP requires only two items to work: a management server and a managed device (or devices). The management server pulls status and health information at regular intervals from the managed devices and stores the information in a table. Managed devices use local SNMP agents to notify the management server when defined behavior occurs (such as errors or "traps"), which are stored in the same table on the server. The result is an accurate, real-time reporting mechanism for outages. However, SNMP as a protocol does not stipulate how the data in these tables is to be presented and managed for the end user. That's where a promising new open-source network-monitoring software called Zenoss (pronounced Zeen-ohss) comes in.

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment. The Core version of Zenoss used in this article is freely available under the GPLv2. An Enterprise version also is available with additional features and support. In this article, we install Zenoss on a CentOS 5.1 system to observe its usefulness in a network-monitoring role. From there, we create a simulated multisystem server network using the following systems: a Fedora-based Postfix e-mail server, an Ubuntu server running Apache and a Windows server running File and Print services. To conserve space, only the CentOS installation is discussed in detail here. For the managed systems, only SNMP installation and configuration are covered.

Building the Zenoss Server
Begin by selecting your hardware. Zenoss lacks specific hardware requirements, but it relies heavily on MySQL, so you can use MySQL requirements as a rough guideline. I recommend using the fastest processor available, 1GB of memory, hard disks fast enough to provide acceptable MySQL performance and Gigabit Ethernet for the network. I ran several test configurations, and this configuration seemed adequate enough for a medium-size network (100+ nodes/devices). To keep configuration simple, all firewalls and SELinux instances were disabled in the test environment. If you use firewalls in your environment, open ports 161 (SNMP), 8080 (Zenoss Management Page) and 514 (if you integrate syslog with Zenoss).

Install CentOS 5.1 on the server using your own preferences. I used a bare install with no X Window System or desktop manager. Assign a static IP address and any other pertinent network information (DNS servers and so forth). After the OS install is complete, install the following packages using the yum command below:

yum install mysql mysql-server net-snmp net-snmp-utils gmp httpd

If the mysqld or the httpd service has not started after yum installs it, start it and set it to run for your configured runlevel. Next, download the latest Zenoss Core rpm from SourceForge.net (2.1.3 at the time of this writing), and install it using rpm from the command line. To start all the Zenoss-related daemons after the rpm has been installed, type the following at a command prompt:

service zenoss start
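Spelled out, the package and service steps on CentOS might look like the following session, run as root. The rpm filename is a stand-in; use the actual file you downloaded from SourceForge:

```shell
# start the prerequisites and persist them across reboots
service mysqld start && chkconfig mysqld on
service httpd start && chkconfig httpd on

# install the downloaded Zenoss Core package (filename illustrative)
rpm -ivh zenoss-2.1.3.el5.i386.rpm
service zenoss start
```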

Launch a Web browser from any machine, and type the IP address of the Zenoss server using port 8080 (for example, http://192.168.142.6:8080). Log in to the site using the default account admin with a password of zenoss. This brings up the main dashboard. The dashboard is a compartmentalized view of the state of your managed devices. If you don't like the default display, you can arrange your dashboard any way you want using the various drop-down lists on the portlets (windows). I recommend setting the Production States portlet to display Production, so we can see our test systems after they are added.

Almost everything related to managed devices in Zenoss revolves around classes. With classes, you can create an infinite number of systems, processes or service classifications to monitor. To begin adding devices, we need to set our SNMP community strings at the top-level Devices class. SNMP community strings are like passphrases used to authenticate traffic between devices. If one device wants to communicate with another, they must have matching community names/strings. In many deployments, administrators use the default community name of public (and/or private), which creates a security risk. I recommend changing these strings and making them into a short phrase. You can add numbers and characters to make the community name more complex to guess/crack, but I find phrases easier to remember.

Click on the Devices link on the navigation menu on the left, so that Devices is listed near the top of the page. Click on the zProperties tab and scroll down. Enter an SNMP

Zenoss and the Art of Network Monitoring
If a server goes down, do you want to hear it? JERAMIAH BOWLING



community string in the zSNMPCommunity field. For our test environment, I used the string whatsourclearanceclarence. You can use different strings with different subclasses of systems or individual systems, but by setting it at the Devices class, it will be used for any subclasses unless it is overridden. You also could list multiple strings in the zSNMPCommunities under the Devices class, which allows you to define multiple strings for the discovery process discussed later. Make sure your community string (zSNMPCommunity) is in this list.

Installing Net-SNMP on Linux Clients
Now, let's set up our Linux systems so they can talk to the Zenoss server. After installing and configuring the operating systems on our other Linux servers, install the Net-SNMP package on each, using the following command on the Ubuntu server:

sudo apt-get install snmpd

And, on the Fedora server, use:

yum install net-snmp

Once the Net-SNMP packages are installed, edit out any other lines in the Access Control sections at the beginning of the /etc/snmp/snmpd.conf file, and add the following lines:

#       sec.name   source            community
com2sec local      localhost         whatsourclearanceclarence
com2sec mynetwork  192.168.142.0/24  whatsourclearanceclarence

#     group.name sec.model sec.name
group MyROGroup  v1        local
group MyROGroup  v1        mynetwork
group MyROGroup  v2c       local
group MyROGroup  v2c       mynetwork

#    name incl/excl subtree mask
view all  included  .1      80

#      group     context sec.model sec.level prefix read write notif
access MyROGroup ""      any       noauth    exact  all  none  none

Do not edit out any lines beneath the last Access Control section. Please note that the above is only a mildly restrictive configuration. Consult the snmpd.conf file or the Net-SNMP documentation if you want to tighten access. On the Ubuntu server, you also may have to change the following line in the /etc/default/snmpd file to allow SNMP to bind to anything other than the local loopback address:

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'
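After restarting snmpd on each client, it's worth confirming from the Zenoss server that each agent actually answers queries. A quick check with snmpwalk, using the community string from the earlier examples (the client address 192.168.142.10 is a placeholder for one of your managed hosts):

```shell
# query the standard "system" subtree from a managed client
snmpwalk -v 2c -c whatsourclearanceclarence 192.168.142.10 system
```

If the agent and its access control are set up correctly, this prints the sysDescr, sysUpTime and related values rather than timing out.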

Installing SNMP on Windows
On the Windows server, access the Add/Remove Programs utility from the Control Panel. Click on the Add/Remove Windows Components button on the left. Scroll down the list of Components, check off Management and Monitoring Tools, and click on the Details button. Check Simple Network Management Protocol in the list, and click OK to install. Close the Add/Remove window, and go into the Services console from Administrative Tools in the Control Panel. Find the SNMP service in the list, right-click on it, and click on Properties to bring up the service properties tabs. Click on the Traps tab, and type in the community name. In the list of Trap Destinations, add the IP address of the Zenoss server. Now, click on the Security tab, and check off the Send authentication trap box, enter the community name, and give it READ-ONLY rights. Click OK, and restart the service.

Return to the Zenoss management Web page. Click the Devices link to go into the subclass of Devices/Servers/Windows, and on the zProperties tab, enter the name of a domain admin account and password in the zWinUser and zWinPassword fields. This account gives Zenoss access to the Windows Management Instrumentation (WMI) on your Windows systems. Make sure to click Save at the bottom of the page before navigating away.

Adding Devices into Zenoss
Now that our systems have SNMP, we can add them into Zenoss. Devices can be added individually or by scanning the network. Let's do both. To add our Ubuntu server into Zenoss, click on the Add Device link under the Management navigation section. Enter the IP address of the server and the community name. Under Device Class Path, set the selection to /Server/Linux. You could add a variety of other hardware, software and Zenoss information on this page before adding a system, but at a minimum, an IP address, name and community name are required (Figure 1). Click the Add Device button, and the discovery process runs. When the results are displayed, click on the link to the new device to access it.

Figure 1. Adding a Device into Zenoss

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment.


To scan the network for devices, click the Networks link under the Browse By section of the navigation menu. If your network is not in the list, add it using CIDR notation. Once added, check the box next to your network, and use the drop-down arrow to click on the Select Discover Devices option. You will see a similar results page as the one from before. When complete, click on the links at the bottom of the results page to access the new devices. Any device found will be placed in the Discovered class. Because we should have discovered the Fedora server and the Windows server, they should be moved to the Devices/Servers/Linux and Devices/Servers/Windows classes, respectively. This can be done from each server's Status tab by using the main drop-down list and selecting Manager→Change Class.

If all has gone well so far, we have a functional SNMP monitoring system that is able to monitor heartbeat/availability (Figure 2) and performance information (Figure 3) on our systems. You can customize other various Status and Performance Monitors to meet your needs, but here we will use the default localhost monitors.

Figure 2. The Zenoss Dashboard

Figure 3. Performance data is collected almost immediately after discovery.

Creating Users and Setting E-Mail Alerts
At this point, we can use the dashboard to monitor the managed devices, but we will be notified only if we visit the site. It would be much more helpful if we could receive alerts via e-mail. To set up e-mail alerting, we need to create a separate user account, as alerts do not work under the admin account. Click on the Setting link under the Management navigation section. Using the drop-down arrow on the menu, select Add User. Enter a user name and e-mail address when prompted. Click on the new user in the list to edit its properties. Enter a password for the new account, and assign a role of Manager. Click Save at the bottom of the page. Log out of Zenoss, and log back in with the new account. Bring the settings page back up, and enter your SMTP server information. After setting up SMTP, we need to create an Alerting Rule for our new user. Click on the Users tab, and click on the account just created in the list. From the resulting page, click on the Edit tab, and enter the e-mail address to which you want alerts sent. Now, go to the Alerting Rules tab, and create a new rule using the drop-down arrow. On the edit tab of the new Alerting Rule, change the Action to email, Enabled to True, and change the Severity formula to >= Warning (Figure 4). Click Save.

Figure 4. Creating an Alert Rule

Figure 5. Zenoss alerts are sent fresh to your mailbox.



The above rule sends alerts when any Production server experiences an event rated Warning or higher (Figure 5). Using a filter, you can create any number of rules and have them apply only to specific devices or groups of devices. If you want to limit your alerts by time, to working hours for example, use the Schedule tab on the Alerting Rule to define a window. If no schedule is specified (the default), the rule runs all the time. In our rule, only one user will be notified. You also can create groups of users from the Settings page, so that multiple people are alerted, or you could use a group e-mail address in your user properties.

Services and Processes
We can expand our view of the test systems by adding a process and a service for Zenoss to monitor. When we refer to a process in Zenoss, we mean an active program, usually a daemon, running on a managed device. Zenoss uses regular expressions to monitor processes.

To monitor Postfix on the mail server, first let's define it as a process. Navigate to the Processes page under the Classes section of the navigation menu. Use the drop-down arrow next to OS Processes, and click Add Process. Enter Postfix as the process ID. When you return to the previous page, click on the link to the new process. On the edit tab of the process, enter master in the Regex field. Click Save before navigating away. Go to the zProperties tab of the process, and make sure the zMonitor field is set to True. Click Save again. Navigate back to the mail server from the dashboard, and on the OS tab, use the topmost menu's drop-down arrow to select Add→Add OSProcess. After the process has been added, we will be alerted if the Postfix process degrades or fails. While still on the OS tab of the server, place a check mark next to the new Postfix process, and from the OS Processes drop-down menu, select Lock OSProcess. On the next set of options, select Lock from deletion. This protects the process from being overwritten if Zenoss remodels the server.

Figure 6. Zenoss comes with a plethora of predefined Windows services to monitor.
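Conceptually, the Regex field works like a pattern match against the host's process table, the same test you could run by hand with grep. A sketch, using simulated ps output (the PIDs and paths are hypothetical):

```shell
# Simulated process listing from a mail server (values illustrative)
ps_output="root     1123  /usr/libexec/postfix/master
postfix  1130  pickup -l -t fifo -u
postfix  1131  qmgr -l -t fifo -u"

# The regex "master" matches only the Postfix master daemon's line,
# which is exactly the process Zenoss watches for.
echo "$ps_output" | grep -c 'master'    # prints 1
```

This is why a short, distinctive regex like master is enough: it only has to single out the supervising daemon, not every Postfix worker.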

Services in Zenoss are defined by active network ports instead of running daemons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1-65535 and include common network apps/protocols, such as SMTP (Port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the main menu's drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).

Only the Beginning
There are a multitude of other features in Zenoss that space here prevents covering, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9) and Zenpacks that provide additional monitoring and performance-capturing capabilities for common applications.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind, as long as you work with networks, with Zenoss, at least you will be able to sleep at night knowing you will hear things when they go down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

Resources

Zenoss: www.zenoss.com

Zenoss SourceForge Downloads Page: sourceforge.net/project/showfiles.php?group_id=163126

NET-SNMP: net-snmp.sourceforge.net

CentOS: www.centos.org

CentOS 5 Mirrors: isoredirect.centos.org/centos/5/isos/i386

Figure 7. Monitoring HTTP as an IPService

Figure 8. Zenoss automatically maps your network for you.

Figure 9. Multiple sites can be monitored geographically with the Google Maps API.


In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market: Dell and HP Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects such as the Etherboot Project and the Linux Terminal Server Project have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys world-wide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?
A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot eXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP; stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4); for example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a

Thin Clients Booting over a Wireless Bridge
How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON


Figure 1. LTSP Traffic Profile and Boot Sequence

thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.

How to Set Up a Thin-Client Network with a Wireless Bridge
One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1, serving PXE clients.
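For reference, /etc/ltsp/dhcpd.conf is a standard ISC dhcpd configuration, so the part you'd edit is an ordinary subnet declaration along these lines (a sketch of the 192.168.0.1 default; the exact ranges and paths in your file may differ, so verify before editing):

```
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.20 192.168.0.250;
    option domain-name-servers 192.168.0.1;
    option broadcast-address 192.168.0.255;
    option routers 192.168.0.1;
    next-server 192.168.0.1;              # TFTP server that hands out the kernel
    filename "/ltsp/i386/pxelinux.0";     # PXE bootloader sent to the clients
    option root-path "/opt/ltsp/i386";    # NFS root the clients mount at stage 3
}
```

Changing the client IP range means adjusting the range line; moving the server means updating the router, DNS and next-server addresses to match.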

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap:    192.168.0.0/24
rpc.mountd: 192.168.0.0/24
rpc.statd:  192.168.0.0/24
in.tftpd:   192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart

sudo invoke-rc.d nfs-common restart

sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide in getting the bridge running so that the clients can boot over the bridge.

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all daemon programs manually.

Evaluating the Performance of a Thin-Client Network

The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to a wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber;


SYSTEM ADMINISTRATION

Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Conclusion

As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our test shows, as illustrated in Figure 5, heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.

Acknowledgement

This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03/IN3/1396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network, instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in remote power and remote terminals, or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup

You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP

Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), locate the subnet section (or each host section, if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;

filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus

Set up a PXE server and then add menus to boot kickstart images, rescue disks and diagnostic tools, all from the network. KYLE RANKIN



Install TFTP

After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package, or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf, if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
# the trivial file transfer protocol.
# The tftp protocol is often used to boot diskless
# workstations, download configuration files to network-aware
# printers and to start the installation process for
# some operating systems.
service tftp
{
        disable         = no
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /var/lib/tftpboot
        per_source      = 11
        cps             = 100 2
        flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 UDP) as input.

Add Syslinux

Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.
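The copy-and-mkdir step just described can be sketched as a tiny shell helper. The function name and the /usr/lib/syslinux path are illustrative assumptions, not from the article; the syslinux package installs pxelinux.0 in different places on different distributions:

```shell
#!/bin/sh
# install_pxelinux PXELINUX_0 TFTP_ROOT: copy the pxelinux.0 binary into
# the TFTP root and create the pxelinux.cfg configuration directory.
install_pxelinux() {
    pxelinux_0=$1    # e.g. /usr/lib/syslinux/pxelinux.0
    tftp_root=$2     # e.g. /var/lib/tftpboot
    mkdir -p "$tftp_root/pxelinux.cfg"
    cp "$pxelinux_0" "$tftp_root/"
}

# Typical invocation, run as root:
#   install_pxelinux /usr/lib/syslinux/pxelinux.0 /var/lib/tftpboot
```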

PXE Menu Magic

You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS, with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
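To make that search order concrete, here is a small shell sketch that derives both filenames for a hypothetical host with MAC 88:99:AA:BB:CC:DD and IP 192.168.0.25 (the example values are mine, not from the article; the hex form is the same value that syslinux's gethostip -x utility prints):

```shell
#!/bin/sh
# Hypothetical example host; substitute your own values.
mac="88:99:AA:BB:CC:DD"
ip="192.168.0.25"

# MAC-based name: "01-" prefix, lowercased, colons replaced by hyphens.
mac_file="01-$(echo "$mac" | tr 'A-Z:' 'a-z-')"
echo "$mac_file"    # 01-88-99-aa-bb-cc-dd

# IP-based name: the four octets as eight uppercase hex digits.
ip_file=$(printf '%02X%02X%02X%02X' $(echo "$ip" | tr '.' ' '))
echo "$ip_file"     # C0A80019
```

For this host, pxelinux would try 01-88-99-aa-bb-cc-dd first, then C0A80019, then C0A8001, and so on, one digit shorter each time, before finally falling back to default.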

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux

label linux

kernel vmlinuz-centos-3.6

append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?

The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what



happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools, like Knoppix and memtest86+, available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus

Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1

timeout 300

prompt 1

display f1.msg

F1 f1.msg

F2 f2.msg

label 1

kernel vmlinuz-knx5.1.1

append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

label 2

kernel vmlinuz-centos-4.5-64

append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option lists a message file to display by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------

<F1> Main | <F2> Help

Default image will boot in 30 seconds

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk

One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk, I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now edit your pxelinux.cfg/default file, and add lines like the one I used above in my example:

label 1

kernel vmlinuz-knx5.1.1

append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also, notice that this example references the renamed vmlinuz-knx5.1.1 and miniroot-knx5.1.1.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove 2




from the append line.

The final step might be the largest for you if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.
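For reference, a minimal /etc/exports entry for such a share might look like the line below. The path and subnet are the ones used in this example; the export options are only a common read-only choice, not the article's prescription. After editing, running exportfs -ra (or restarting the NFS server) activates the export:

```
/mnt/knoppix/5.1.1  10.0.0.0/255.255.255.0(ro,async,no_subtree_check)
```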

Extra Features: Memtest86+

Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3

kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion

There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.


New LinuxJournal.com Mobile

We are all very excited to let you know that LinuxJournal.com is optimized for mobile viewing. You can enjoy all of our news, blogs and articles from anywhere you can find a data connection on your phone or mobile device.

We know you find it difficult to be separated from your Linux Journal, so now you can take LinuxJournal.com everywhere. Need to read that latest shell script trick right now? You got it.

Go to m.linuxjournal.com to enjoy this new experience, and be sure to let us know how it works for you.

KATHERINE DRUCKMAN

Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan

IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825–1829). Later, in 1998, these RFCs were deprecated by RFCs 2401–2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network

Creating VPNs with IPsec and SSL/TLS

How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN


Figure 1. A Basic VPN Tunnel

Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. SWAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a setmark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.
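As an illustration of that marking approach (the mark value and rules here are hypothetical, not taken from the article), a fragment in iptables-save format might mark arriving ESP packets in the mangle table and then, because the mark survives decryption, match it in the filter table:

```
*mangle
-A PREROUTING -p esp -j MARK --set-mark 1
COMMIT

*filter
-A INPUT -m mark --mark 1 -j ACCEPT
COMMIT
```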

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN
OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.

www.linuxjournal.com | 31

Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan
First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
	#
	# Simply use raw RSA keys.
	# After starting Openswan, run:
	#	ipsec showhostkey --left (or --right)
	# and fill in the connection similarly
	# to the example below.
	left=192.168.0.89
	leftrsasigkey=0sAQPP...
	# The remote user.
	right=192.168.0.92
	rightrsasigkey=0sAQON...
	type=tunnel
	auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsakey

Then, copy and paste the contents of rsakey into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. And, output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:

Version check and ipsec on-path [OK]

Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)

Checking for IPsec support in kernel [OK]

NETKEY detected, testing for disabled ICMP send_redirects   [OK]

NETKEY detected, testing for disabled ICMP accept_redirects [OK]

Checking for RSA private key (/etc/ipsec.d/hostkey.secrets) [OK]

Checking that pluto is running [OK]

Checking for ip command [OK]

Checking for iptables command [OK]

Opportunistic Encryption Support [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)

IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)

IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets flowing in the same direction; this is an identifier of the connection (each direction of the tunnel has its own SPI).
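You can see where tcpdump gets those values from with a short Python sketch (names are mine; the layout is the standard ESP header, a 32-bit SPI followed by a 32-bit sequence number):

```python
import struct

def parse_esp_header(packet: bytes):
    """Pull the SPI and sequence number out of a raw ESP packet.

    An ESP packet begins with a 32-bit SPI followed by a 32-bit
    sequence number, both in network byte order; everything after
    that is encrypted payload.
    """
    spi, seq = struct.unpack("!II", packet[:8])
    return spi, seq

# Two packets in the same direction share an SPI; the sequence increments.
pkt_a = struct.pack("!II", 0xd514eed9, 0x7) + b"...encrypted..."
pkt_b = struct.pack("!II", 0xd514eed9, 0x8) + b"...encrypted..."
spi_a, seq_a = parse_esp_header(pkt_a)
spi_b, seq_b = parse_esp_header(pkt_b)
```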

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.


SYSTEM ADMINISTRATION

Example: Setting Up a VPN Tunnel with OpenVPN
First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side in a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun

ifconfig 10.0.0.1 10.0.0.2

secret static.key

comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName

dev tun

ifconfig 10.0.0.2 10.0.0.1

secret static.key

comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf and running openvpn client.conf on the client.

You will get output like this on the client

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar  3 2007

IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.

LZO compression initialized

TUNTAP device tun0 opened

/sbin/ip link set dev tun0 up mtu 1500

/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1

UDPv4 link local (bound): [undef]:1194

UDPv4 link remote: 192.168.0.89:1194

Peer Connection Initiated with 192.168.0.89:1194

Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read from or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).

IPsec and OpenVPN: a Short Comparison
IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.
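The HMAC idea is easy to demonstrate with Python's standard library (a conceptual sketch, not code from either project; the key and packet bytes are invented). Each peer holds the same key, computes a tag over the packet, and rejects any packet whose tag does not match:

```python
import hmac
import hashlib

def authenticate(key: bytes, packet: bytes) -> bytes:
    """Compute an HMAC-SHA1 tag over a packet, as IPsec and OpenVPN
    do to detect tampering in transit."""
    return hmac.new(key, packet, hashlib.sha1).digest()

def verify(key: bytes, packet: bytes, tag: bytes) -> bool:
    # compare_digest avoids leaking timing information
    return hmac.compare_digest(authenticate(key, packet), tag)

key = b"shared-secret"                      # both peers hold this key
tag = authenticate(key, b"some packet bytes")
```

A forwarding NAT that rewrites the packet (the problem NAT-T addresses for IPsec's own signatures) would make `verify` fail in exactly this way.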

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement
Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era
Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C# and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security
Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects (and from other programming languages) is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.
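As a sketch of how this looks in MySQL (the table, database and account names here are invented for illustration), the application account is granted EXECUTE on the procedure but no direct privileges on the underlying table:

```sql
-- Hypothetical example: the procedure runs with the definer's
-- privileges, so the caller needs no INSERT right on the table.
CREATE PROCEDURE mydb.log_login(IN p_user VARCHAR(30))
    SQL SECURITY DEFINER
BEGIN
    INSERT INTO mydb.login_audit (user_name, login_time)
    VALUES (p_user, NOW());
END;

-- The application account can call the procedure, and only that.
GRANT EXECUTE ON PROCEDURE mydb.log_login TO 'app_account'@'%';
```

(In the mysql client, you would wrap the procedure body in DELIMITER statements; SQL SECURITY DEFINER is the default in MySQL 5.)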

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring their legacy advantages and challenges to MySQL. GUY HARRISON

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).
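The mechanism at work is parameter binding, the same discipline a stored procedure call enforces. This sketch uses Python's sqlite3 module as a stand-in for a MySQL client library (the table and data are invented); the contrast between concatenated and bound parameters is the point:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "alice' OR '1'='1"

# Unsafe: string concatenation lets the input rewrite the query,
# so the WHERE clause becomes: name = 'alice' OR '1'='1'.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + malicious + "'").fetchall()

# Safe: the driver binds the value as data, so it can only ever
# be compared against the name column, never parsed as SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()
```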

Data Abstraction
It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
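A procedure of that shape might look something like the following sketch (the accounts schema and status strings are invented for illustration, not taken from a real banking system). The application issues one CALL; the balance check, the two updates and the commit all happen server-side:

```sql
-- Hypothetical fund-transfer procedure: one call from the
-- application replaces several queries and their round trips.
CREATE PROCEDURE transfer_funds(
    IN  p_from   INT,
    IN  p_to     INT,
    IN  p_amount DECIMAL(10,2),
    OUT p_status VARCHAR(30))
BEGIN
    DECLARE v_balance DECIMAL(10,2);

    START TRANSACTION;

    -- Lock the source row while we inspect the balance.
    SELECT balance INTO v_balance
      FROM accounts
     WHERE account_id = p_from
       FOR UPDATE;

    IF v_balance < p_amount THEN
        SET p_status = 'INSUFFICIENT FUNDS';
        ROLLBACK;
    ELSE
        UPDATE accounts SET balance = balance - p_amount
         WHERE account_id = p_from;
        UPDATE accounts SET balance = balance + p_amount
         WHERE account_id = p_to;
        SET p_status = 'OK';
        COMMIT;
    END IF;
END;
```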

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications
Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in


Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed
It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice



Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation
Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping
It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?
Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements, such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes and will take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly 2006, with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.



In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up, with no downtime, is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.
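One way to produce a suitably long random key and a minimal authkeys file is sketched below. The tools shown (head, sha1sum, awk) are common defaults, not anything Heartbeat mandates, and the file is written to the current directory here; on a real node it would be /etc/ha.d/authkeys:

```shell
# Generate a 40-hex-character random key from /dev/urandom.
KEY=$(head -c 64 /dev/urandom | sha1sum | awk '{print $1}')

# Write a minimal two-line authkeys file and lock down its mode.
printf 'auth 1\n1 sha1 %s\n' "$KEY" > ./authkeys
chmod 600 ./authkeys
```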

The ha.cf File
The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around! Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive, the time between heartbeats, in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW


Listing 1. The /etc/ha.d/authkeys File

auth 1

1 sha1 ThisIsAVeryLongAndBoringPassword

Sometimes there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

increasing the keepalive.

The deadtime is next. This is the time to wait, without hearing from a cluster member, before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state when the primary node returns.

www.linuxjournal.com | 39

Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos' heartbeat ip address is 192.168.1.11

# phobos' heartbeat ip address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File

The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located in either /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).
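To make the :: convention concrete, here is a small shell sketch of my own (not part of Heartbeat; its internal handling differs in detail) showing how a haresources entry maps to a resource-script invocation:

```shell
# Sketch: how a haresources entry conceptually becomes a resource-script
# call. Illustrative only; Heartbeat does this internally.
spec="Filesystem::/dev/etherd/e1.0::/opt/data::xfs"

script=${spec%%::*}                                  # part before the first "::"
args=$(printf '%s' "${spec#*::}" | sed 's/::/ /g')   # remaining fields become arguments

# On takeover, the script is effectively run with "start" (and with
# "stop", in reverse order, when resources are released):
echo "/etc/ha.d/resource.d/$script $args start"
```

This is why the left-to-right order of the haresources line matters: it is also the start order.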


Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.
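As a rough illustration of the shape such a script takes, here is a minimal sketch; the daemon name and paths are hypothetical placeholders, not anything from the article:

```shell
#!/bin/sh
# Minimal sketch of an LSB-style init script that Heartbeat could manage.
# "mydaemon" and its paths are hypothetical placeholders.
DAEMON="/usr/local/bin/mydaemon"
PIDFILE="/var/run/mydaemon.pid"

do_start() {
    # A real script would launch the daemon here, e.g.:
    #   "$DAEMON" & echo $! > "$PIDFILE"
    echo "Starting mydaemon"
}

do_stop() {
    # A real script would kill the process recorded in $PIDFILE.
    echo "Stopping mydaemon"
}

case "${1:-}" in
    start)   do_start ;;
    stop)    do_stop ;;
    restart) do_stop; do_start ;;
    *)       echo "Usage: $0 {start|stop|restart}" ;;
esac
```

The important part for Heartbeat is simply that "start" and "stop" work reliably and are idempotent.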

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second, which is not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting

Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly, or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion

I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat, Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation

To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more space. I recommend at least a 1GHz processor or above with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux. HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (default is random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct numbers (3891/3892).

Fedora Directory Server: the Evolution of Linux Authentication

Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING


You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script, or directly edit /etc/rc.d/rc.local, to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for logs and plugins.


Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora + FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts, or use DNS. I recommend having DNS and NTP in place already, to make life easier.
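Both prerequisites are easy to sanity-check from a shell before continuing. This is my own quick sketch, not from the article; "phobos" stands in for the peer server's hostname, so adjust it for your environment:

```shell
# Quick pre-replication sanity checks (illustrative; "phobos" is a
# placeholder for the peer server's hostname).
peer="phobos"

# 1) Name resolution: the peer must resolve via DNS or /etc/hosts.
if getent hosts "$peer" > /dev/null 2>&1; then
    echo "$peer resolves"
else
    echo "WARNING: cannot resolve $peer"
fi

# 2) Clock offset: print both clocks in UTC and compare by eye
#    (the remote half requires SSH access to the peer):
date -u
# ssh "$peer" date -u
```

If either check fails, fix DNS//etc/hosts or NTP before touching the replication wizard.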

The next step is creating a Replication Manager account to bind one server's directory to the other, and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab, and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom.


Figure 5. User Screen

Figure 6. Setting Up Replica

On server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, and copy the resulting ldif folder to server two and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients

With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM, the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility, available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
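For reference, the name-service entries the utility writes to /etc/nsswitch.conf look roughly like the following (a sketch; the exact contents vary by release and by what else is configured):

```
passwd:     files ldap
shadow:     files ldap
group:      files ldap
```

The "files ldap" ordering means local accounts such as root still resolve first, with LDAP consulted afterward.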


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some dll issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and makes it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings, without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.


Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


Luckily, as it operates over a single TCP port, VNC is easy to tunnel through SSH, through Virtual Private Network (VPN) sessions and through TLS/SSL wrappers, such as stunnel. Let's look at a simple VNC-over-SSH example.

Tunneling VNC over SSH

To tunnel VNC over SSH, your remote system must be running an SSH dæmon (sshd) and a VNC server application. Your local system must have an SSH client (ssh) and a VNC client application.

Our example remote system, remotebox, already is running sshd. Suppose it also has vino, which is also known as the GNOME Remote Desktop Preferences applet (on Ubuntu systems, it's located in the System menu's Preferences section). First, presumably from remotebox's local console, you need to open that applet and enable remote desktop sharing. Figure 2 shows vino's General settings.

If you want to view only this system's remote desktop, without controlling it, uncheck Allow other users to control your desktop. If you don't want to have to confirm remote connections explicitly (for example, because you want to connect to this system when it's unattended), you can uncheck the Ask you for confirmation box. Any time you leave yourself logged in to an unattended system, be sure to use a password-protected screensaver.

vino is limited in this way: because vino is loaded only after you log in, you can use it only to connect to a fully logged-in GNOME session in progress, not, for example, to a gdm (GNOME login) prompt. Unlike vino, other versions of VNC can be invoked as needed from xinetd or inetd. That technique is out of the scope of this article, but see Resources for a link to a how-to for doing so in Ubuntu, or simply Google the string "vnc xinetd".

Continuing with our vino example, don't close that Remote Desktop Preferences applet yet. Be sure to check the Require the user to enter this password box, and to select a difficult-to-guess password. Remember, vino runs in an already-logged-in desktop session, so unless you set a password here, you'll run the risk of allowing completely unauthenticated sessions (depending on whether a password-protected screensaver is running).

If your remote system will be run logged in but unattended, you probably will want to uncheck Ask you for confirmation. Again, don't forget that locked screensaver.

We're not done setting up vino on remotebox yet. Figure 3 shows the Advanced Settings tab.

Several advanced settings are particularly noteworthy. First, you should check Only allow local connections, after which remote connections still will be possible, but only via a port-forwarder like SSH or stunnel. Second, you may want to set a custom TCP port for vino to listen on, via the Use an alternative port option. In this example, I've chosen 20226. This is an instance of useful security-through-obscurity: if our other


Figure 2. General Settings in GNOME Remote Desktop Preferences (vino)

Figure 3. Advanced Settings in GNOME Remote Desktop Preferences (vino)

(more meaningful) security controls fail, at least we won't be running VNC on the obvious default port.

Also, you should check the box next to Lock screen on disconnect, but you probably should not check Require encryption, as SSH will provide our session encryption, and adding redundant (and not-necessarily-known-good) encryption will only increase vino's already-significant latency. If you think there's only a moderate level of risk of eavesdropping in the environment in which you want to use vino (for example, on a small, private, local-area network inaccessible from the Internet) vino's TLS implementation may be good enough for you. In that case, you may opt to check the Require encryption box and skip the SSH tunneling.

(If any of you know more about TLS in vino than I was able to divine from the Internet, please write in! During my research for this article, I found no documentation or on-line discussions of vino's TLS design details whatsoever, beyond people commenting that the soundness of TLS in vino is unknown.)

This month, I seem to be offering you more "opt-out" opportunities in my examples than usual. Perhaps I'm becoming less dogmatic with age. Regardless, let's assume you've followed my advice to forego vino's encryption in favor of SSH.

At this point, you're done with the server-side settings. You won't have to change those again. If you restart your GNOME session on remotebox, vino will restart as well, with the options you just set. The following steps, however, are necessary each time you want to initiate a VNC/SSH session.

On mylaptop (the system from which you want to connect to remotebox), open a terminal window and type this command:

mick@mylaptop:~$ ssh -L 20226:remotebox:20226 admin-slave@remotebox

OpenSSH's -L option sets up a local port-forwarder. In this example, connections to mylaptop's TCP port 20226 will be forwarded to the same port on remotebox. The syntax for this option is "-L [local port]:[remote IP or hostname]:[remote port]". You can use any available local TCP port you like (higher than 1024, unless you're running SSH as root), but the remote port must correspond to the alternative port you set vino to listen on (20226 in our example), or if you didn't set an alternative port, it should be VNC's default of 5900.
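The pattern generalizes like this. The sketch below just assembles and prints the command rather than opening a connection; the hostnames and ports are the article's examples:

```shell
# -L [local port]:[remote host]:[remote port]
LOCAL_PORT=20226          # any free local port above 1024
REMOTE_HOST=remotebox
REMOTE_PORT=20226         # vino's alternative port, or 5900 if unchanged
FWD="-L ${LOCAL_PORT}:${REMOTE_HOST}:${REMOTE_PORT}"

# The command you would actually run:
echo "ssh ${FWD} admin-slave@${REMOTE_HOST}"
```

With vino left on its default port, the same pattern becomes -L 5900:remotebox:5900.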

That's it for the SSH session. You'll be prompted for a password and then given a bash prompt on remotebox. But you won't need it, except to enter exit when your VNC session is finished. It's time to fire up your VNC client.

vino's companion client, vinagre (also known as the GNOME Remote Desktop Viewer), is good enough for our purposes here. On Ubuntu systems, it's located in the Applications menu in the Internet section. In Figure 4, I've opened the Remote Desktop Viewer and clicked the Connect button. As you can see, rather than remotebox, I've entered localhost as the hostname. I've also entered port number 20226.

When I click the Connect button, vinagre connects to mylaptop's local TCP port 20226, which actually is being listened on by my local SSH process. SSH forwards this connection attempt through its encrypted connection to TCP 20226 on remotebox, which is being listened on by remotebox's vino process.

At this point, I'm prompted for remotebox's vino password (shown in Figure 2). On successful authentication, I'll have full access to my active desktop session on remotebox.

To end the session, I close the Remote Desktop Viewer's session and then enter exit in my SSH session to remotebox; that's all there is to it.

This technique is easy to adapt to other versions of VNC servers and clients, and probably also to other versions of SSH; there are commercial alternatives to OpenSSH, including Windows versions. I mentioned that VNC client applications are available for a wide variety of platforms; on any such platform, you can use this technique, so long as you also have an SSH client that supports port forwarding.

Conclusion

Thus ends my crash course on how to control graphical applications over networks. I hope my examples of both techniques, SSH with X forwarding and VNC over SSH, help you get started with whatever particular distribution and software packages you prefer. Be safe!

Mick Bauer (darth.elmo@wiremonkeys.org) is Network Security Architect for one of the US's largest banks. He is the author of the O'Reilly book Linux Server Security, 2nd edition (formerly called Building Secure Servers With Linux), an occasional presenter at information security conferences and composer of the "Network Engineering Polka".


Resources

The Cygwin/X (information about Cygwin's free X server for Microsoft Windows): x.cygwin.com

Tichondrius' HOWTO for setting up VNC with resumable sessions (Ubuntu-centric, but mostly applicable to other distributions): ubuntuforums.org/showthread.php?t=122402

Wikipedia's VNC article, which may be helpful in making sense of the different flavors of VNC: en.wikipedia.org/wiki/Vnc

Figure 4. Using vinagre to Connect to an SSH-Forwarded Local Port


Does anyone remember Linuxcare? Founded in 1998, Linuxcare was a company that provided support services for Linux users in corporate environments. I remember seeing Linuxcare at the first-ever LinuxWorld conference in San Jose, and the thing I took away from that LinuxWorld was the Linuxcare Bootable Business Card (BBC). The BBC was a 50MB cut-down Linux distribution that fit on a business-card-size compact disc. I used that distribution to recover and repair quite a few machines, until the advent of Knoppix. I always loved the portability of that little CD though, and I missed it greatly until I stumbled across Damn Small Linux (DSL) one day.

After reading through the DSL Web site, I discovered that it was possible to run DSL off of a bootable USB key, and that old love for the Bootable Business Card was rekindled in a new way. It wasn't until I had a conversation with fellow sysadmin Kyle Rankin about the PXE boot environment he'd implemented that I realized it might be possible to set up a USB key to do more than merely boot a recovery environment. Before long, I had added the CentOS and Ubuntu netinstalls to my little USB key. Not long after that, I was mentioning this in my favorite IRC channel, and one of the fellows in there suggested I put the code on SourceForge and call it Billix. I'd had a couple beers by then and thought it sounded like a great idea. In that instant, Billix was born.

Billix is an aggregation of many different tools that can be useful to system administrators, all compressed down to fit within a 256MB bootable USB thumbdrive. The 256MB size is not an arbitrary number; rather, it was chosen because USB thumbdrives are very inexpensive at that size (many companies now give them away as advertising gimmicks). This allows me to have many Billix keys lying around, just waiting to be used. Because the keys are cheap or free, I don't feel bad about leaving one in a server for a day or two. If your USB drive is larger than 256MB, you still can use it for its designed purpose: storing files. Billix doesn't hamper normal use of the USB drive in any way. There also is an ISO distribution of Billix if you want to burn a CD of it, but I feel it's not nearly as convenient as having it on a USB key.

The current Billix distribution (0.21 at the time of this writing) includes the following tools:

Damn Small Linux 4.2.5

Ubuntu 8.04 LTS netinstall

Ubuntu 7.10 netinstall

Ubuntu 6.06 LTS netinstall

Fedora 8 netinstall

CentOS 5.1 netinstall

CentOS 4.6 netinstall

Debian Etch netinstall

Debian Sarge netinstall

Memtest86 memory-checking utility

Ntpwd Windows password changing utility

DBAN disk wiper utility

So, with one USB key, a system administrator can recover or repair a machine, install one of eight different Linux distributions, test the memory in a system, get into a Windows machine with a lost password, or wipe the disks of a machine before repurpose or disposal. In order to install any of the netinstall-based Linux distributions, a working Internet connection with DHCP is required, as the netinstall downloads the installation bits for each distribution on the fly from Internet-based mirrors.

Hopefully, you're excited to check out Billix. You simply can download the ISO version and burn it to a CD to get started, but the full utility of Billix really shines when you install it on a USB disk. Before you install it on a USB disk, you need to meet the following prerequisites:

Billix: a Sysadmin's Swiss Army Knife
Turn that spare USB stick into a sysadmin's dream with Billix. BILL CHILDERS



256MB or greater USB drive with FAT- or FAT32-based filesystem.

Internet connection with DHCP (for netinstalls only; not required for DSL, Windows password removal or disk wiping with DBAN).

install-mbr (part of the mbr package on Ubuntu or Debian, needed for some USB drives).

syslinux (from the syslinux package on Ubuntu or Debian, required to create the bootsector on the USB drive).

Your system must be capable of booting from USB devices (most have this ability if they're made after 2005).

To install the USB-based version of Billix, first check your drive. If that drive has the U3 Windows software on it, you may want to remove it to unlock all of the drive's capacity (see the Resources section for U3 removal utilities, which are typically Windows-based). Next, if your USB drive has data on it, back up the data. I cannot stress this enough. You will be making adjustments to the partition table of the USB drive, so backing up any data that already is on the key is critical. Download the latest version of Billix from the Sourceforge.net project page to your computer. Once the download is complete, untar the contents of the tarball to the root directory of your USB drive.

Now that the contents of the tarball are on your USB drive, you need to install a Master Boot Record (MBR) on the drive and set a bootsector on the drive. The Master Boot Record needs to be set up on the USB drive first. Issue an install-mbr -p1 <device> (where <device> is your USB drive, such as /dev/sdb). Warning: make sure that you get the device of the USB drive correct, or you run the risk of messing up the MBR on your system's boot device. The -p1 option tells install-mbr to set the first partition as active (that's the one that will contain the bootsector).

Next, the bootsector needs to be installed within the first partition. Run syslinux -s <device/partition> (where <device/partition> is the device and partition of the USB drive, such as /dev/sdb1). Warning: much like installing the MBR, installing the bootsector can be a dangerous operation if you run it on the wrong device, so take care and double-check your command line before pressing the Enter key.
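Put together, the whole preparation looks something like the sketch below. The device name /dev/sdb, mountpoint and tarball filename are assumptions (verify your drive with dmesg or fdisk -l, and use the actual Billix tarball name you downloaded); the echo guards make this a dry run, so remove them only after you are certain of the device:

```shell
#!/bin/sh
set -e
DEV=/dev/sdb       # hypothetical device name -- verify before removing echoes!
PART="${DEV}1"     # first partition on the drive
MNT=/mnt/usb       # wherever that partition is mounted

# 1. Unpack the Billix tarball onto the root of the USB filesystem
echo tar -C "$MNT" -xzf billix-0.21.tar.gz

# 2. Write the MBR and mark partition 1 active
echo install-mbr -p1 "$DEV"

# 3. Install the syslinux bootsector on the first partition
echo syslinux -s "$PART"
```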

Figure 1. The Billix Boot Menu

At this point, your USB drive can be unmounted safely, and you can test it out by booting from the USB drive. Once your system successfully boots from the USB drive, you should see a menu similar to the one shown in Figure 1. Simply choose the number for what you want to boot, run or install, and that distribution will spring into action. If you don't select a number, Damn Small Linux will boot automatically after 30 seconds.

Damn Small Linux is a miniature version of Knoppix (it actually has much of the automatic hardware-detection routines of Knoppix in it). As such, it makes an excellent rescue environment, or it can be used as a quick "trusted desktop" in the event you need to "borrow" a friend's computer to do something. I have used DSL in the past to commandeer a system temporarily at a cybercafé, so I could log in to work and fix a sick server. I've even used DSL to boot and mount a corrupted Windows filesystem, and I was able to save some of the data. DSL is fairly full-featured for its size, and it comes with two window managers (JWM or Fluxbox). It can be configured to save its data back to the USB disk in a persistent fashion, so you always can be sure you have your critical files with you and that it's easily accessible.

All the Linux distribution installations have one thing in common: they are all network-based installs. Although this is a good thing for Billix, as they take up very little space (around 10MB for each distro), it can be a bad thing during installation, as the installation time will vary with the speed of your Internet connection. There is one other upside to a network-based installation. In many cases, there is no need to update the newly installed operating system after installation, because the OS bits that are downloaded are typically up to date. Note that when using the Red Hat-based installers (CentOS 4.6, CentOS 5.1 and Fedora 8), the system may appear to hang during the download of a file called minstg2.img. The system probably isn't hanging; it's just downloading that file, which is fairly large (around 40MB), so it can take a while depending on the speed of the mirror and the speed of your connection. Take care not to specify the USB disk accidentally as the install target for the distribution you are attempting to install.

Troubleshooting Billix

A few things can go wrong when converting a USB key to run Billix (or any USB-based distribution). The most common issue is for the USB drive to fail to boot the system. This can be due to several things. Older systems often split USB disk support into USB-Floppy emulation and USB-HDD emulation. For Billix to work on these systems, USB-HDD needs to be enabled. If your drive came with the U3 Windows-based software vault, this typically needs to be disabled or removed prior to installing Billix.

If you're seeing "MBR123" or something similar in the upper-left corner, but the system is hanging, you have a misconfigured MBR. Try install-mbr again, and make sure to use the -p1 switch. You will need to run syslinux again after running install-mbr. If all else fails, you probably need to wipe the USB drive and begin again. Back up the data on the USB drive, then use fdisk to build a new partition table (make sure to set it as FAT or FAT32). Use mkfs.vfat (with the -F 32 switch if it's a FAT32 filesystem) to build a new blank filesystem, untar the tarball again, and run install-mbr and syslinux on the newly defined filesystem.

The memtest86 utility has been around for quite a few years, yet it's a key tool for a sysadmin when faced with a flaky computer. It does only one thing, but it does it very well: it tests the RAM of a system very thoroughly. Simply boot off the USB drive, select memtest from the menu, press Enter, and memtest86 will load and begin testing the RAM of the system immediately. At this point, you can remove the USB drive from the computer. It's no longer needed, as memtest86 is very small and loads completely into memory on startup.

The ntpwd Windows password "cracking" tool can be a controversial tool, but it is included in the Billix distribution because, as a system administrator, I've been asked countless times to get into Windows systems (or accounts on Windows systems) where the password has been lost or forgotten. The ntpwd utility can be a bit daunting, as the UI is text-based and nearly nonexistent, but it does a good job of mounting FAT32- or NTFS-based partitions, editing the SAM account database and saving those changes. Be sure to read all the messages that ntpwd displays, and take care to select the proper disk partition to edit. Also, take the program's advice and nullify a password rather than trying to change it from within the interface; zeroing the password works much more reliably.

DBAN (otherwise known as Darik's Boot and Nuke) is a very good "nuke it from orbit" hard disk wiper. It provides various levels of wipe, from a basic "overwrite the disk with zeros" to a full DoD-certified, multipass wipe. Like memtest86, DBAN is small and loads completely into memory, so you can boot the utility, remove the USB drive, start a wipe and move on to another system. I've used this to wipe clean disks on systems before handing them over to a recycler or before selling a system.

In closing, Billix may not make you coffee in the morning or eradicate Windows from the face of the earth, but having a USB key in your pocket that offers you the functionality to do all of those tasks quickly and easily can make the life of a system administrator (or any Linux-oriented person) much easier.

Bill Childers is an IT Manager in Silicon Valley, where he lives with his wife and two children. He enjoys Linux far too much, and probably should get more sun from time to time. In his spare time, he does work with the Gilroy Garlic Festival, but he does not smell like garlic.




Expanding Billix

It's relatively easy to expand Billix to support other Linux distributions, such as Knoppix or the Ubuntu live CDs. Copy the contents of the Billix USB tarball to a directory on your hard disk, and download the distro you want. Copy the necessary kernel and initrd to the directory where you put the contents of the USB tarball, taking care to rename any files if there are files in that directory with the same name. Copy any compressed filesystems that your new distro may use to the USB drive (for example, Knoppix has the KNOPPIX directory, and Puppy Linux uses PUP_XXX.SFS). Then, look at the boot configuration for that distro (it should be in isolinux.cfg). Take the necessary lines out of that file, and put them in the Billix syslinux.cfg file, changing filenames as necessary. Optionally, you can add a menu item to the boot.msg file. Finally, run syslinux -s <device>, and reboot your system to test out your newly expanded Billix.
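As a purely hypothetical illustration, a Knoppix entry carried over into Billix's syslinux.cfg might end up looking something like this; the kernel and initrd names here are invented renames to avoid clashing with Billix's own files, and the append options must come from the distro's actual isolinux.cfg, not from this sketch:

```
label knoppix
  kernel knx-linux
  append ramdisk_size=100000 init=/etc/init lang=us vga=791 initrd=knx-minirt.gz nomce
```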

I have a 2GB USB drive that has a "Super-Billix" installation that includes Knoppix and Ubuntu 8.04. An added bonus of having the entire Ubuntu live CD in your pocket is that, thanks to the speed of USB 2.0, you can install Ubuntu in less than ten minutes, which would be really useful at an installfest. There is good information on creating Ubuntu-bootable USB drives available at the Pendrive Linux Web site.

Alternatively, a really neat thing to do (but way beyond the scope of this article) is to convert Billix into a network-boot (via Pre-Execution Environment, or PXE) environment. I've actually got a VMware virtual machine running Billix as a PXE boot server.

Resources

Billix Project Page: sourceforge.net/projects/billix

Damn Small Linux: www.damnsmalllinux.org

DBAN Project Page: dban.sourceforge.net

Knoppix: www.knoppix.net

Pendrive Linux: www.pendrivelinux.com

Syslinux: syslinux.zytor.com/index.php

Pxelinux: syslinux.zytor.com/pxe.php

U3 Removal Software: www.u3.com/uninstall


If a tree falls in the woods and no one is there to hear it, does it make a sound? This is the classic query designed to place your mind into the Zen-like state known as the silent mind. Whether or not you want to hear a tree fall, if you run a network, you probably want to hear a server when it goes down. Many organizations utilize the long-established Simple Network Management Protocol (SNMP) as a way to monitor their networks proactively and listen for things going down.

At a rudimentary level, SNMP requires only two items to work: a management server and a managed device (or devices). The management server pulls status and health information at regular intervals from the managed devices and stores the information in a table. Managed devices use local SNMP agents to notify the management server when defined behavior occurs (such as errors or "traps"), which are stored in the same table on the server. The result is an accurate, real-time reporting mechanism for outages. However, SNMP as a protocol does not stipulate how the data in these tables is to be presented and managed for the end user. That's where a promising new open-source network-monitoring software called Zenoss (pronounced Zeen-ohss) comes in.

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment. The Core version of Zenoss used in this article is freely available under the GPLv2. An Enterprise version also is available, with additional features and support. In this article, we install Zenoss on a CentOS 5.1 system to observe its usefulness in a network-monitoring role. From there, we create a simulated multisystem server network using the following systems: a Fedora-based Postfix e-mail server, an Ubuntu server running Apache and a Windows server running File and Print services. To conserve space, only the CentOS installation is discussed in detail here. For the managed systems, only SNMP installation and configuration are covered.

Building the Zenoss Server

Begin by selecting your hardware. Zenoss lacks specific hardware requirements, but it relies heavily on MySQL, so you can use MySQL requirements as a rough guideline. I recommend using the fastest processor available, 1GB of memory, fast enough hard disks to provide acceptable MySQL performance and Gigabit Ethernet for the network. I ran several test configurations, and this configuration seemed adequate enough for a medium-size network (100+ nodes/devices). To keep configuration simple, all firewalls and SELinux instances were disabled in the test environment. If you use firewalls in your environment, open ports 161 (SNMP), 8080 (Zenoss Management Page) and 514 (if you integrate syslog with Zenoss).

Install CentOS 5.1 on the server using your own preferences. I used a bare install with no X Window System or desktop manager. Assign a static IP address and any other pertinent network information (DNS servers and so forth). After the OS install is complete, install the following packages using the yum command below:

yum install mysql mysql-server net-snmp net-snmp-utils gmp httpd

If the mysqld or the httpd service has not started after yum installs it, start it and set it to run for your configured runlevel. Next, download the latest Zenoss Core rpm from Sourceforge.net (2.1.3 at the time of this writing), and install it using rpm from the command line. To start all the Zenoss-related daemons after the rpm has been installed, type the following at a command prompt:

service zenoss start
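Collected into one place, the server build steps can be sketched as a script like this. The rpm filename below is a placeholder (use the actual 2.1.3 package name you downloaded), and every command is echoed as a dry run:

```shell
#!/bin/sh
set -e
# Prerequisites from the article
echo yum -y install mysql mysql-server net-snmp net-snmp-utils gmp httpd

# Make sure MySQL and Apache run now and at the configured runlevel
for svc in mysqld httpd; do
    echo service "$svc" start
    echo chkconfig "$svc" on
done

# Install Zenoss Core and start its daemons
ZENOSS_RPM=zenoss-2.1.3.el5.i386.rpm   # placeholder filename
echo rpm -ivh "$ZENOSS_RPM"
echo service zenoss start
```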

Launch a Web browser from any machine, and type the IP address of the Zenoss server using port 8080 (for example, http://192.168.142.6:8080). Log in to the site using the default account admin with a password of zenoss. This brings up the main dashboard. The dashboard is a compartmentalized view of the state of your managed devices. If you don't like the default display, you can arrange your dashboard any way you want using the various drop-down lists on the portlets (windows). I recommend setting the Production States portlet to display Production, so we can see our test systems after they are added.

Almost everything related to managed devices in Zenoss revolves around classes. With classes, you can create an infinite number of systems, processes or service classifications to monitor. To begin adding devices, we need to set our SNMP community strings at the top-level Devices class. SNMP community strings are like passphrases used to authenticate traffic between devices. If one device wants to communicate with another, they must have matching community names/strings. In many deployments, administrators use the default community name of public (and/or private), which creates a security risk. I recommend changing these strings and making them into a short phrase. You can add numbers and characters to make the community name more complex to guess/crack, but I find phrases easier to remember.

Click on the Devices link on the navigation menu on the left, so that Devices is listed near the top of the page. Click on the zProperties tab and scroll down. Enter an SNMP community string in the zSNMPCommunity field. For our test environment, I used the string whatsourclearanceclarence. You can use different strings with different subclasses of systems or individual systems, but by setting it at the Devices class, it will be used for any subclasses unless it is overridden. You also could list multiple strings in the zSNMPCommunities under the Devices class, which allows you to define multiple strings for the discovery process discussed later. Make sure your community string (zSNMPCommunity) is in this list.

Zenoss and the Art of Network Monitoring
If a server goes down, do you want to hear it? JERAMIAH BOWLING

Installing Net-SNMP on Linux Clients

Now, let's set up our Linux systems so they can talk to the Zenoss server. After installing and configuring the operating systems on our other Linux servers, install the Net-SNMP package on each, using the following command on the Ubuntu server:

sudo apt-get install snmpd

And on the Fedora server, use:

yum install net-snmp

Once the Net-SNMP packages are installed, edit out any other lines in the Access Control sections at the beginning of /etc/snmp/snmpd.conf, and add the following lines:

#       sec.name   source            community
com2sec local      localhost         whatsourclearanceclarence
com2sec mynetwork  192.168.142.0/24  whatsourclearanceclarence

#       groupName  secModel  secName
group   MyROGroup  v1        local
group   MyROGroup  v1        mynetwork
group   MyROGroup  v2c       local
group   MyROGroup  v2c       mynetwork

#       name  incl/excl  subtree  mask
view    all   included   .1       80

#       group      context  secmodel  seclevel  prefix  read  write  notif
access  MyROGroup  ""       any       noauth    exact   all   none   none

Do not edit out any lines beneath the last Access Control sections. Please note that the above is only a mildly restrictive configuration. Consult the snmpd.conf file or the Net-SNMP documentation if you want to tighten access. On the Ubuntu server, you also may have to change the following line in the /etc/default/snmpd file to allow SNMP to bind to anything other than the local loopback address:

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'
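Once snmpd is restarted, a quick sanity check from the Zenoss server confirms the agent answers before you add the device in Zenoss. The IP below is a made-up example node on the article's 192.168.142.0/24 test network; this sketch only assembles and prints the command:

```shell
HOST=192.168.142.101                  # hypothetical managed node
COMMUNITY=whatsourclearanceclarence   # community string from the article
CMD="snmpwalk -v 2c -c ${COMMUNITY} ${HOST} system"

# Print the command; run it for real once the agent is up
echo "$CMD"
```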

Installing SNMP on Windows

On the Windows server, access the Add/Remove Programs utility from the Control Panel. Click on the Add/Remove Windows Components button on the left. Scroll down the list of Components, check off Management and Monitoring Tools, and click on the Details button. Check Simple Network Management Protocol in the list, and click OK to install. Close the Add/Remove window, and go into the Services console from Administrative Tools in the Control Panel. Find the SNMP service in the list, right-click on it, and click on Properties to bring up the service properties tabs. Click on the Traps tab, and type in the community name. In the list of Trap Destinations, add the IP address of the Zenoss server. Now, click on the Security tab, check off the Send authentication trap box, enter the community name, and give it READ-ONLY rights. Click OK, and restart the service.

Return to the Zenoss management Web page. Click the Devices link to go into the subclass of /Devices/Servers/Windows, and on the zProperties tab, enter the name of a domain admin account and password in the zWinUser and zWinPassword fields. This account gives Zenoss access to the Windows Management Instrumentation (WMI) on your Windows systems. Make sure to click Save at the bottom of the page before navigating away.

Adding Devices into Zenoss

Now that our systems have SNMP, we can add them into Zenoss. Devices can be added individually or by scanning the network. Let's do both. To add our Ubuntu server into Zenoss, click on the Add Device link under the Management navigation section. Enter the IP address of the server and the community name. Under Device Class Path, set the selection to /Server/Linux. You could add a variety of other hardware, software and Zenoss information on this page before adding a system, but at a minimum, an IP address, name and community name is required (Figure 1). Click the Add Device button, and the discovery process runs. When the results are displayed, click on the link to the new device to access it.

Figure 1. Adding a Device into Zenoss



To scan the network for devices, click the Networks link under the Browse By section of the navigation menu. If your network is not in the list, add it using CIDR notation. Once added, check the box next to your network, and use the drop-down arrow to click on the Select Discover Devices option. You will see a similar results page as the one from before. When complete, click on the links at the bottom of the results page to access the new devices. Any device found will be placed in the /Discovered class. Because we should have discovered the Fedora server and the Windows server, they should be moved to the /Devices/Servers/Linux and /Devices/Servers/Windows classes, respectively. This can be done from each server's Status tab by using the main drop-down list and selecting Manager→Change Class.

If all has gone well so far, we have a functional SNMP monitoring system that is able to monitor heartbeat/availability (Figure 2) and performance information (Figure 3) on our systems. You can customize other various Status and Performance Monitors to meet your needs, but here we will use the default localhost monitors.

Figure 2. The Zenoss Dashboard

Figure 3. Performance data is collected almost immediately after discovery.

Creating Users and Setting E-Mail Alerts

At this point, we can use the dashboard to monitor the managed devices, but we will be notified only if we visit the site. It would be much more helpful if we could receive alerts via e-mail. To set up e-mail alerting, we need to create a separate user account, as alerts do not work under the admin account. Click on the Settings link under the Management navigation section. Using the drop-down arrow on the menu, select Add User. Enter a user name and e-mail address when prompted. Click on the new user in the list to edit its properties. Enter a password for the new account, and assign a role of Manager. Click Save at the bottom of the page. Log out of Zenoss, and log back in with the new account. Bring the settings page back up, and enter your SMTP server information. After setting up SMTP, we need to create an Alerting Rule for our new user. Click on the Users tab, and click on the account just created in the list. From the resulting page, click on the Edit tab, and enter the e-mail address to which you want alerts sent. Now, go to the Alerting Rules tab, and create a new rule using the drop-down arrow. On the edit tab of the new Alerting Rule, change the Action to email, Enabled to True, and change the Severity formula to >= Warning (Figure 4). Click Save.

Figure 4. Creating an Alert Rule

Figure 5. Zenoss alerts are sent fresh to your mailbox.



The above rule sends alerts when any Production server experiences an event rated Warning or higher (Figure 5). Using a filter, you can create any number of rules and have them apply only to specific devices or groups of devices. If you want to limit your alerts by time, to working hours for example, use the Schedule tab on the Alerting Rule to define a window. If no schedule is specified (the default), the rule runs all the time. In our rule, only one user will be notified. You also can create groups of users from the Settings page, so that multiple people are alerted, or you could use a group e-mail address in your user properties.

Services and Processes

We can expand our view of the test systems by adding a process and a service for Zenoss to monitor. When we refer to a process in Zenoss, we mean an active program, usually a daemon, running on a managed device. Zenoss uses regular expressions to monitor processes.
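Conceptually, the Regex field works like grepping the output of ps for a pattern. A rough, self-contained illustration (the sample process list here is invented; Zenoss's actual matching happens internally over SNMP-reported process tables):

```shell
# Pretend this is the managed host's process table
ps_output="master
pickup -l -t fifo -u
qmgr -l -t fifo -u"

# Count processes matching the pattern Zenoss would use for Postfix
matches=$(printf '%s\n' "$ps_output" | grep -c 'master')
echo "$matches"   # only the Postfix master daemon matches
```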

To monitor Postfix on the mail server, first let's define it as a process. Navigate to the Processes page under the Classes section of the navigation menu. Use the drop-down arrow next to OS Processes, and click Add Process. Enter Postfix as the process ID. When you return to the previous page, click on the link to the new process. On the edit tab of the process, enter master in the Regex field. Click Save before navigating away. Go to the zProperties tab of the process, and make sure the zMonitor field is set to True. Click Save again. Navigate back to the mail server from the dashboard, and on the OS tab, use the topmost menu's drop-down arrow to select Add→Add OSProcess. After the process has been added, we will be alerted if the Postfix process degrades or fails. While still on the OS tab of the server, place a check mark next to the new Postfix process, and from the OS Processes drop-down menu, select Lock OSProcess. On the next set of options, select Lock from deletion. This protects the process from being overwritten if Zenoss remodels the server.

Figure 6. Zenoss comes with a plethora of predefined Windows services to monitor.

Services in Zenoss are defined by active network ports instead of running daemons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1-65535 and include common network apps/protocols, such as SMTP (Port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the main menu's drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server, and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).

Only the Beginning

There are a multitude of other features in Zenoss that space here prevents covering, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9) and Zenpacks that provide additional monitoring and performance-capturing capabilities for common applications.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind as long as you work with networks, with Zenoss, at least you will be able to sleep at night, knowing you will hear things when they go down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

Resources

Zenoss: www.zenoss.com

Zenoss SourceForge Downloads Page: sourceforge.net/project/showfiles.php?group_id=163126

NET-SNMP: net-snmp.sourceforge.net

CentOS: www.centos.org

CentOS 5 Mirrors: isoredirect.centos.org/centos/5/isos/i386

Figure 7. Monitoring HTTP as an IPService

Figure 8. Zenoss automatically maps your network for you.

Figure 9. Multiple sites can be monitored geographically with the Google Maps API.

In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market. Dell and HP/Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects, such as the Etherboot Project and the Linux Terminal Server Project, have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys world-wide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?

A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot eXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP; stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4); for example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.
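If you want to watch these numbered stages on the wire, a packet capture on the server covering DHCP, TFTP, NFS and XDMCP traffic makes the whole sequence visible. This command is a suggestion of ours rather than part of the original setup; adjust the interface name to match your server:

    sudo tcpdump -i eth0 port 67 or port 68 or port 69 or port 2049 or port 177

Wireshark (see Resources) gives the same view graphically.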

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a

Thin Clients Booting over a Wireless Bridge

How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON

SYSTEM ADMINISTRATION

Figure 1. LTSP Traffic Profile and Boot Sequence

thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.

How to Set Up a Thin-Client Network with a Wireless Bridge

One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1, serving PXE clients.
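For reference, the relevant stanza of /etc/ltsp/dhcpd.conf looks something like the following. The exact addresses and the filename path vary by release, so treat this as an illustrative sketch matching the 192.168.0.1 default mentioned above:

    subnet 192.168.0.0 netmask 255.255.255.0 {
        range 192.168.0.20 192.168.0.250;
        option broadcast-address 192.168.0.255;
        option routers 192.168.0.1;
        next-server 192.168.0.1;
        filename "/ltsp/i386/pxelinux.0";
    }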

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap: 192.168.0.0/24

rpc.mountd: 192.168.0.0/24

rpc.statd: 192.168.0.0/24

in.tftpd: 192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart

sudo invoke-rc.d nfs-common restart

sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide in getting the bridge running, so that the clients can boot over the bridge.

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing, and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet

adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all daemon programs manually.

Evaluating the Performance of a Thin-Client Network

The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to a wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber;


Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

and, third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Conclusion

As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our tests show, as illustrated in Figure 5, that heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple of ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.
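As a concrete sketch of that last tuning suggestion, on an LTSP server the clients' startup scripts live inside the chroot, so syslogd can be disabled there. The chroot path below is the Edubuntu default; verify it on your system before running this:

    sudo chroot /opt/ltsp/i386 update-rc.d -f sysklogd remove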

Acknowledgement

This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03/IN3/1396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org

It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now, you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network, instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in remote power and remote terminals, or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup

You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP

Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), and locate the subnet section (or each host section, if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;

filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus

Set up a PXE server, and then add menus to boot kickstart images, rescue disks and diagnostic tools, all from the network. KYLE RANKIN


Install TFTP

After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package, or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions, if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
# the trivial file transfer protocol.
# The tftp protocol is often used to boot diskless
# workstations, download configuration files to network-aware
# printers, and to start the installation process for
# some operating systems.
service tftp
{
        disable         = no
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /var/lib/tftpboot
        per_source      = 11
        cps             = 100 2
        flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.
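With iptables, for example, a rule along the following lines admits TFTP traffic. This is a sketch to adapt to your existing ruleset, not a complete firewall configuration:

    sudo iptables -A INPUT -p udp --dport 69 -j ACCEPT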

Add Syslinux

Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.

PXE Menu Magic

You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS, with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
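The naming rules above are mechanical enough to script. The following is our sketch, not from the article, using an example MAC and IP to show how the two file names are derived:

```shell
# MAC-based name: lowercase the MAC, swap colons for hyphens, prefix 01-
mac="88:99:AA:BB:CC:DD"
echo "01-$(echo "$mac" | tr 'A-Z:' 'a-z-')"
# prints 01-88-99-aa-bb-cc-dd

# IP-based name: each octet rendered as two uppercase hex digits
ip="10.0.0.200"
printf '%02X%02X%02X%02X\n' $(echo "$ip" | tr '.' ' ')
# prints 0A0000C8
```

pxelinux would then try 0A0000C8, then 0A0000C, then 0A0000 and so on, dropping a digit each time, before finally falling back to the file named default.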

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux

label linux
    kernel vmlinuz-centos-3.6
    append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?

The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what


happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools, like Knoppix and memtest86+, available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus

Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more-specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
    kernel vmlinuz-knx511
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx511.gz quiet BOOT_IMAGE=knoppix

label 2
    kernel vmlinuz-centos-4.5-64
    append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option lists a message, if there are any to display, by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

 ----| Boot Options |-----
|                         |
|   1. Knoppix 5.1.1      |
|   2. CentOS 4.5 64 bit  |
|                         |
 -------------------------
<F1> Main | <F2> Help
Default image will boot in 30 seconds...

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk

One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk; I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx511.gz and vmlinuz-knx511, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the ones I used above in my example:

label 1
    kernel vmlinuz-knx511
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx511.gz quiet BOOT_IMAGE=knoppix
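The mount, copy and rename steps described above can be written out as shell commands. The CD device and mountpoint here are assumptions; the renamed files match the example:

    sudo mount /dev/cdrom /mnt/cdrom
    sudo cp /mnt/cdrom/boot/isolinux/vmlinuz /var/lib/tftpboot/vmlinuz-knx511
    sudo cp /mnt/cdrom/boot/isolinux/miniroot.gz /var/lib/tftpboot/miniroot-knx511.gz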

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx511 and miniroot-knx511.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove 2


from the append line.

The final step might be the largest for you, if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.
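A matching export for that directory might look like this in /etc/exports; the options shown are a plausible sketch rather than the article's exact configuration:

    /mnt/knoppix/5.1.1 10.0.0.0/24(ro,async,no_root_squash)

After editing the file, re-export the shares with sudo exportfs -ra.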

Extra Features: Memtest86+

Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
        kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion

There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.



Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan

IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825-1829). Later, in 1998, these RFCs were deprecated by RFCs 2401-2412. The IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

Creating VPNs with IPsec and SSL/TLS: how to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN

Figure 1. A Basic VPN Tunnel

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. S/WAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a setmark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.
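As a rough sketch of that marking approach (the rules and the mark value 1 are illustrative assumptions, not taken from the article), incoming packets could be marked on arrival and then matched after decryption:

```
# mark incoming ESP traffic (NETKEY case); "1" is an arbitrary mark value
iptables -t mangle -A PREROUTING -p esp -j MARK --set-mark 1
# the mark is kept in the sk_buff across decryption, so later rules can match it
iptables -A INPUT -m mark --mark 1 -j ACCEPT
```

Because the mark survives decryption, the second rule effectively matches the plaintext packets that arrived via IPsec.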

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN

OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan

First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
        #
        # Simply use raw RSA keys.
        # After starting openswan, run:
        #       ipsec showhostkey --left (or --right)
        # and fill in the connection similarly
        # to the example below.
        left=192.168.0.89
        leftrsasigkey=0sAQPP...
        # The remote user.
        #
        right=192.168.0.92
        rightrsasigkey=0sAQON...
        type=tunnel
        auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsakey

Then, copy and paste the contents of rsakey into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of:

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. An output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                              [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                         [OK]
NETKEY detected, testing for disabled ICMP send_redirects    [OK]
NETKEY detected, testing for disabled ICMP accept_redirects  [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets)  [OK]
Checking that pluto is running                               [OK]
Checking for 'ip' command                                    [OK]
Checking for 'iptables' command                              [OK]
Opportunistic Encryption Support                             [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets; this is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.



Example: Setting Up a VPN Tunnel with OpenVPN

First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side in a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).
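Generating and distributing the key might look like the following sketch (the client hostname and destination path are placeholders, not from the article):

```
# on the server: create the shared secret and restrict its permissions
openvpn --genkey --secret static.key
chmod 600 static.key
# copy it to the client over SSH
scp static.key root@client.example.com:/etc/openvpn/static.key
```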

Then, create a server configuration file, named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar  3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official
port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as
the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).

IPsec and OpenVPN: a Short Comparison

IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement

Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era

Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

- Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

- Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

- Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

- Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C# and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security

Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.
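To make this concrete, here is a sketch of a definer-rights procedure (the schema, table, column and account names are all invented for illustration; MySQL's SQL SECURITY DEFINER characteristic is the actual mechanism being shown):

```shell
# Write the routine to a file; it would then be loaded with something like:
#   mysql -u dba -p appdb < add_employee.sql
cat > add_employee.sql <<'EOF'
DELIMITER //
-- Executes with the privileges of its creator (definer rights), so the
-- calling account needs no direct INSERT privilege on the tables.
CREATE PROCEDURE add_employee(IN p_name VARCHAR(64), IN p_salary DECIMAL(10,2))
    SQL SECURITY DEFINER
BEGIN
    INSERT INTO employees (name, salary) VALUES (p_name, p_salary);
    INSERT INTO audit_log (action, happened_at) VALUES ('add_employee', NOW());
END//
DELIMITER ;
-- The application account is then limited to EXECUTE:
GRANT EXECUTE ON PROCEDURE appdb.add_employee TO 'app'@'%';
EOF
```

Because the application account holds EXECUTE on the procedure rather than INSERT on the tables, every insert necessarily passes through the logging statement above.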

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

MySQL 5 Stored Procedures: Relic or Revolution? Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Data Abstraction

It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic

Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
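A rough sketch of such a fund-transfer procedure follows (table and column names are assumptions for illustration; a real version would also check withdrawal limits and handle errors):

```shell
# Write the routine to a file; it would be loaded with something like:
#   mysql -u dba -p bank < transfer_funds.sql
cat > transfer_funds.sql <<'EOF'
DELIMITER //
CREATE PROCEDURE transfer_funds(IN p_from INT, IN p_to INT,
                                IN p_amount DECIMAL(10,2))
BEGIN
    DECLARE v_balance DECIMAL(10,2);
    START TRANSACTION;
    -- lock the source row and read its balance inside the server
    SELECT balance INTO v_balance FROM accounts
     WHERE acct_id = p_from FOR UPDATE;
    IF v_balance >= p_amount THEN
        UPDATE accounts SET balance = balance - p_amount WHERE acct_id = p_from;
        UPDATE accounts SET balance = balance + p_amount WHERE acct_id = p_to;
        COMMIT;
    ELSE
        ROLLBACK;
    END IF;
END//
DELIMITER ;
EOF
```

The client then issues a single CALL transfer_funds(1, 2, 100.00) in place of the six round trips described above.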

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

Creating Common Routines across Multiple Applications

Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

For instance, in a banking application, transfer-of-funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed

It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation

Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping

It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?

Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations

MySQL stored programs can be used for a variety of tasks in addition to traditional application logic.

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes, and will take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.

You Decide

The bottom line is that MySQL stored procedures give you more options for implementing your application, and therefore are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006, with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.

www.linuxjournal.com | 37


In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat

Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare, on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses, from looking at your environment, as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat

Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.
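Because the key never has to be typed again, one convenient approach is to generate it from /dev/urandom. The snippet below is only a sketch: it writes a local ./authkeys file rather than /etc/ha.d/authkeys (adjust the path on a real node), and it uses the sha1 method shown in Listing 1.

```shell
# Sketch: generate a long random key and write a two-line authkeys file.
# Writes ./authkeys here; on a real node, the file is /etc/ha.d/authkeys.
KEY=$(head -c 32 /dev/urandom | sha1sum | awk '{print $1}')
printf 'auth 1\n1 sha1 %s\n' "$KEY" > authkeys
chmod 600 authkeys   # Heartbeat requires the file to be unreadable by others
```

Copy the same file to every node in the cluster so they all share the key.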

The ha.cf File

The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive, the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try increasing the keepalive.

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

SYSTEM ADMINISTRATION

Listing 1. The /etc/ha.d/authkeys File

auth 1
1 sha1 ThisIsAVeryLongAndBoringPassword

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the cluster declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.
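To illustrate the filtering, here is the same check run against some canned output (the two device lines below are invented examples; on a real system you would pipe `setserial -g /dev/ttyS*` into grep instead of the here-document):

```shell
# Keep only ports with a detected UART; drop "UART: unknown" lines.
# The here-document simulates setserial output for illustration only.
PORTS=$(
cat <<'EOF' | grep -v 'UART: unknown'
/dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 4
/dev/ttyS1, UART: unknown, Port: 0x02f8, IRQ: 3
EOF
)
echo "$PORTS"   # only the /dev/ttyS0 line survives the filter
```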

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state

www l inux journa l com | 3 9

Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos' heartbeat ip address is 192.168.1.11

# phobos' heartbeat ip address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File

The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.
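The start-left-to-right, stop-right-to-left rule can be sketched in a few lines of shell. The resource names below are placeholders, not taken from the listings:

```shell
# Placeholder resource list, in haresources order (leftmost first).
RESOURCES="IPaddr Filesystem nfs-kernel-server"

# Takeover: resources are started left to right.
STARTED=""
for r in $RESOURCES; do STARTED="$STARTED $r(start)"; done

# Handover: resources are stopped right to left, so reverse the list first.
REVERSED=""
for r in $RESOURCES; do REVERSED="$r $REVERSED"; done
STOPPED=""
for r in $REVERSED; do STOPPED="$STOPPED $r(stop)"; done

echo $STARTED   # IPaddr(start) Filesystem(start) nfs-kernel-server(start)
echo $STOPPED   # nfs-kernel-server(stop) Filesystem(stop) IPaddr(stop)
```

This ordering matters: you want the cluster IP up before the services that bind to it start, and you want those services stopped before the IP goes away.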

Resources are scripts located in either /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.
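The :: separator is easy to pick apart with plain parameter expansion. This small sketch just splits the example spec above into its script name and argument string:

```shell
# Split a haresources resource spec on the first '::' into script and arguments.
spec="IPaddr::192.168.1.9/24/eth0"
script=${spec%%::*}   # everything before the first '::'
args=${spec#*::}      # everything after it
echo "script=$script args=$args"
```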

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).




Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting

Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion

I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha


In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use, Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation

To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of space. I recommend at least a 1GHz processor or above, with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory, and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser, under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script, or directly edit /etc/rc.d/rc.local, to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two), configured the same way (Fedora + FDS), with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already to make life easier.
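Both prerequisites are easy to check from a shell before you begin. The sketch below uses a placeholder peer hostname (server-two.example.com); run it on each server, substituting the other node's FQDN:

```shell
# Placeholder peer; replace with the other server's FQDN.
PEER=${PEER:-server-two.example.com}

# Name resolution: succeeds via DNS or an /etc/hosts entry.
if getent hosts "$PEER" >/dev/null 2>&1; then
    RESOLVE=ok
else
    RESOLVE=failed
fi
echo "resolution: $RESOLVE"

# Clock check: run this on both servers and compare; with NTP in place,
# the difference should be no more than a few seconds.
NOW=$(date -u +%s)
echo "epoch seconds: $NOW"
```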

The next step is creating a Replication Manager account to bind one server's directory to the other, and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Figure 5. User Screen

Figure 6. Setting Up Replica

Now, you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always keep directories in sync. You also could choose to schedule your updates instead. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients

With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM, the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility, available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tab. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
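For reference, the change the utility makes to /etc/nsswitch.conf amounts to adding ldap as a lookup source after the local files. The relevant lines end up looking roughly like the following (a sketch of the three account-related entries, not a complete nsswitch.conf):

```
passwd: files ldap
shadow: files ldap
group:  files ldap
```

With this ordering, local accounts in /etc/passwd are still consulted first, so root logins keep working even if the directory server is unreachable.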

www.linuxjournal.com | 45

Figure 7. Enter a name and password.

Figure 8. Enter information for the RManager account.

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's POSIX information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.
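For instance, autofs can pull users' home directories from a file server at login; a minimal sketch (the server name and export path here are made up, and in a fuller setup the maps themselves can live in LDAP) might be:

```
# /etc/auto.master (excerpt): let autofs manage /home
/home   /etc/auto.home

# /etc/auto.home: mount each user's directory from the file server
*   -rw,soft   fileserver:/export/home/&
```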

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some .dll issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions
FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and makes it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.


SYSTEM ADMINISTRATION

Setting up replication in FDS is a relatively painless process.

Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


(more meaningful) security controls fail, at least we won't be running VNC on the obvious default port.

Also, you should check the box next to Lock screen on disconnect, but you probably should not check Require encryption, as SSH will provide our session encryption, and adding redundant (and not-necessarily-known-good) encryption will only increase vino's already-significant latency. If you think there's only a moderate level of risk of eavesdropping in the environment in which you want to use vino (for example, on a small, private local-area network inaccessible from the Internet), vino's TLS implementation may be good enough for you. In that case, you may opt to check the Require encryption box and skip the SSH tunneling.

(If any of you know more about TLS in vino than I was able to divine from the Internet, please write in. During my research for this article, I found no documentation or on-line discussions of vino's TLS design details whatsoever, beyond people commenting that the soundness of TLS in vino is unknown.)

This month, I seem to be offering you more "opt-out" opportunities in my examples than usual. Perhaps I'm becoming less dogmatic with age. Regardless, let's assume you've followed my advice to forego vino's encryption in favor of SSH.

At this point, you're done with the server-side settings. You won't have to change those again. If you restart your GNOME session on remotebox, vino will restart as well, with the options you just set. The following steps, however, are necessary each time you want to initiate a VNC/SSH session.

On mylaptop (the system from which you want to connect to remotebox), open a terminal window, and type this command:

mick@mylaptop:~$ ssh -L 20226:remotebox:20226 admin-slave@remotebox

OpenSSH's -L option sets up a local port-forwarder. In this example, connections to mylaptop's TCP port 20226 will be forwarded to the same port on remotebox. The syntax for this option is "-L [local port]:[remote IP or hostname]:[remote port]". You can use any available local TCP port you like (higher than 1024, unless you're running SSH as root), but the remote port must correspond to the alternative port you set vino to listen on (20226 in our example), or if you didn't set an alternative port, it should be VNC's default of 5900.
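For example, if you had left vino on its default port and wanted a different local port, the same tunnel (reusing this article's hostnames and account) could be built like this:

```shell
# Forward local TCP port 5901 to VNC's default port 5900 on remotebox
ssh -L 5901:remotebox:5900 admin-slave@remotebox
```

The VNC client would then connect to localhost, port 5901.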

That's it for the SSH session. You'll be prompted for a password and then given a bash prompt on remotebox. But you won't need it, except to enter exit when your VNC session is finished. It's time to fire up your VNC client.

vino's companion client, vinagre (also known as the GNOME Remote Desktop Viewer), is good enough for our purposes here. On Ubuntu systems, it's located in the Applications menu, in the Internet section. In Figure 4, I've opened the Remote Desktop Viewer and clicked the Connect button. As you can see, rather than remotebox, I've entered localhost as the hostname. I've also entered port number 20226.

When I click the Connect button, vinagre connects to mylaptop's local TCP port 20226, which actually is being listened on by my local SSH process. SSH forwards this connection attempt through its encrypted connection to TCP 20226 on remotebox, which is being listened on by remotebox's vino process.

At this point, I'm prompted for remotebox's vino password (shown in Figure 2). On successful authentication, I'll have full access to my active desktop session on remotebox.

To end the session, I close the Remote Desktop Viewer's session and then enter exit in my SSH session to remotebox. That's all there is to it.

This technique is easy to adapt to other versions of VNC servers and clients, and probably also to other versions of SSH; there are commercial alternatives to OpenSSH, including Windows versions. I mentioned that VNC client applications are available for a wide variety of platforms; on any such platform, you can use this technique, so long as you also have an SSH client that supports port forwarding.

Conclusion
Thus ends my crash course on how to control graphical applications over networks. I hope my examples of both techniques, SSH with X forwarding and VNC over SSH, help you get started with whatever particular distribution and software packages you prefer. Be safe!

Mick Bauer (darth.elmo@wiremonkeys.org) is Network Security Architect for one of the US's largest banks. He is the author of the O'Reilly book Linux Server Security, 2nd edition (formerly called Building Secure Servers With Linux), an occasional presenter at information security conferences and composer of the "Network Engineering Polka".


Resources

The Cygwin/X (information about Cygwin's free X server for Microsoft Windows): x.cygwin.com

Tichondrius' HOWTO for setting up VNC with resumable sessions (Ubuntu-centric, but mostly applicable to other distributions): ubuntuforums.org/showthread.php?t=122402

Wikipedia's VNC article, which may be helpful in making sense of the different flavors of VNC: en.wikipedia.org/wiki/Vnc

Figure 4. Using vinagre to Connect to an SSH-Forwarded Local Port


Does anyone remember Linuxcare? Founded in 1998, Linuxcare was a company that provided support services for Linux users in corporate environments. I remember seeing Linuxcare at the first-ever LinuxWorld conference in San Jose, and the thing I took away from that LinuxWorld was the Linuxcare Bootable Business Card (BBC). The BBC was a 50MB cut-down Linux distribution that fit on a business-card-size compact disc. I used that distribution to recover and repair quite a few machines, until the advent of Knoppix. I always loved the portability of that little CD though, and I missed it greatly until I stumbled across Damn Small Linux (DSL) one day.

After reading through the DSL Web site, I discovered that it was possible to run DSL off of a bootable USB key, and that old love for the Bootable Business Card was rekindled in a new way. It wasn't until I had a conversation with fellow sysadmin Kyle Rankin about the PXE boot environment he'd implemented that I realized it might be possible to set up a USB key to do more than merely boot a recovery environment. Before long, I had added the CentOS and Ubuntu netinstalls to my little USB key. Not long after that, I was mentioning this in my favorite IRC channel, and one of the fellows in there suggested I put the code on SourceForge and call it Billix. I'd had a couple beers by then and thought it sounded like a great idea. In that instant, Billix was born.

Billix is an aggregation of many different tools that can be useful to system administrators, all compressed down to fit within a 256MB bootable USB thumbdrive. The 256MB size is not an arbitrary number; rather, it was chosen because USB thumbdrives are very inexpensive at that size (many companies now give them away as advertising gimmicks). This allows me to have many Billix keys lying around, just waiting to be used. Because the keys are cheap or free, I don't feel bad about leaving one in a server for a day or two. If your USB drive is larger than 256MB, you still can use it for its designed purpose, storing files; Billix doesn't hamper normal use of the USB drive in any way. There also is an ISO distribution of Billix if you want to burn a CD of it, but I feel it's not nearly as convenient as having it on a USB key.

The current Billix distribution (0.21 at the time of this writing) includes the following tools:

Damn Small Linux 4.2.5

Ubuntu 8.04 LTS netinstall

Ubuntu 7.10 netinstall

Ubuntu 6.06 LTS netinstall

Fedora 8 netinstall

CentOS 5.1 netinstall

CentOS 4.6 netinstall

Debian Etch netinstall

Debian Sarge netinstall

Memtest86 memory-checking utility

Ntpwd Windows password changing utility

DBAN disk wiper utility

So, with one USB key, a system administrator can recover or repair a machine, install one of eight different Linux distributions, test the memory in a system, get into a Windows machine with a lost password, or wipe the disks of a machine before repurpose or disposal. In order to install any of the netinstall-based Linux distributions, a working Internet connection with DHCP is required, as the netinstall downloads the installation bits for each distribution on the fly from Internet-based mirrors.

Hopefully, you're excited to check out Billix. You simply can download the ISO version and burn it to a CD to get started, but the full utility of Billix really shines when you install it on a USB disk. Before you install it on a USB disk, you need to meet the following prerequisites:

Billix: a Sysadmin's Swiss Army Knife
Turn that spare USB stick into a sysadmin's dream with Billix. BILL CHILDERS



256MB or greater USB drive, with FAT- or FAT32-based filesystem

Internet connection with DHCP (for netinstalls only; not required for DSL, Windows password removal or disk wiping with DBAN)

install-mbr (part of the mbr package on Ubuntu or Debian; needed for some USB drives)

syslinux (from the syslinux package on Ubuntu or Debian; required to create the bootsector on the USB drive)

Your system must be capable of booting from USB devices (most have this ability if they're made after 2005)

To install the USB-based version of Billix, first check your drive. If that drive has the U3 Windows software on it, you may want to remove it to unlock all of the drive's capacity (see the Resources section for U3 removal utilities, which are typically Windows-based). Next, if your USB drive has data on it, back up the data. I cannot stress this enough. You will be making adjustments to the partition table of the USB drive, so backing up any data that already is on the key is critical. Download the latest version of Billix from the Sourceforge.net project page to your computer. Once the download is complete, untar the contents of the tarball to the root directory of your USB drive.

Now that the contents of the tarball are on your USB drive, you need to install a Master Boot Record (MBR) on the drive and set a bootsector on the drive. The Master Boot Record needs to be set up on the USB drive first. Issue an install-mbr -p1 <device> (where <device> is your USB drive, such as /dev/sdb). Warning: make sure that you get the device of the USB drive correct, or you run the risk of messing up the MBR on your system's boot device. The -p1 option tells install-mbr to set the first partition as active (that's the one that will contain the bootsector).

Next, the bootsector needs to be installed within the first partition. Run syslinux -s <device/partition> (where <device/partition> is the device and partition of the USB drive, such as /dev/sdb1). Warning: much like installing the MBR, installing the bootsector can be a dangerous operation if you run it on the wrong device, so take care and double-check your command line before pressing the Enter key.
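Putting the whole procedure together as a sketch (the device name and tarball filename below are assumptions, not from this article; verify the device with fdisk -l or dmesg first, because these commands overwrite boot records on whatever device you name):

```shell
# ASSUMPTION: the USB key is /dev/sdb, its FAT partition is /dev/sdb1,
# and the downloaded tarball is named billix-0.21.tar.gz.
mount /dev/sdb1 /mnt/usb                 # mount the key's filesystem
tar -xzf billix-0.21.tar.gz -C /mnt/usb  # unpack Billix to the key's root
umount /mnt/usb
install-mbr -p1 /dev/sdb                 # write an MBR; mark partition 1 active
syslinux -s /dev/sdb1                    # write the bootsector to partition 1
```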

Figure 1. The Billix Boot Menu

At this point, your USB drive can be unmounted safely, and you can test it out by booting from the USB drive. Once your system successfully boots from the USB drive, you should see a menu similar to the one shown in Figure 1. Simply choose the number for what you want to boot, run or install, and that distribution will spring into action. If you don't select a number, Damn Small Linux will boot automatically after 30 seconds.

Damn Small Linux is a miniature version of Knoppix (it actually has much of the automatic hardware-detection routines of Knoppix in it). As such, it makes an excellent rescue environment, or it can be used as a quick "trusted desktop" in the event you need to "borrow" a friend's computer to do something. I have used DSL in the past to commandeer a system temporarily at a cybercafé, so I could log in to work and fix a sick server. I've even used DSL to boot and mount a corrupted Windows filesystem, and I was able to save some of the data. DSL is fairly full-featured for its size, and it comes with two window managers (JWM or Fluxbox). It can be configured to save its data back to the USB disk in a persistent fashion, so you always can be sure you have your critical files with you and that it's easily accessible.

All the Linux distribution installations have one thing in common: they are all network-based installs. Although this is a good thing for Billix, as they take up very little space (around 10MB for each distro), it can be a bad thing during installation, as the installation time will vary with the speed of your Internet connection. There is one other upside to a network-based installation. In many cases, there is no need to update

www l inux journa l com | 1 5

Troubleshooting Billix

A few things can go wrong when converting a USB key to run Billix (or any USB-based distribution). The most common issue is for the USB drive to fail to boot the system. This can be due to several things. Older systems often split USB disk support into USB-Floppy emulation and USB-HDD emulation. For Billix to work on these systems, USB-HDD needs to be enabled. If your drive came with the U3 Windows-based software vault, this typically needs to be disabled or removed prior to installing Billix.

If you're seeing "MBR123" or something similar in the upper-left corner, but the system is hanging, you have a misconfigured MBR. Try install-mbr again, and make sure to use the -p1 switch. You will need to run syslinux again after running install-mbr. If all else fails, you probably need to wipe the USB drive and begin again. Back up the data on the USB drive, then use fdisk to build a new partition table (make sure to set it as FAT or FAT32). Use mkfs.vfat (with the -F 32 switch if it's a FAT32 filesystem) to build a new, blank filesystem, untar the tarball again, and run install-mbr and syslinux on the newly defined filesystem.
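That recovery sequence, sketched as commands (again assuming the key is /dev/sdb, which is an example device name, and that every byte on the drive can be lost):

```shell
# ASSUMPTION: USB key is /dev/sdb; back up its contents first.
fdisk /dev/sdb               # interactively recreate a single partition;
                             # set its type to W95 FAT32 and write the table
mkfs.vfat -F 32 /dev/sdb1    # build a new, blank FAT32 filesystem
mount /dev/sdb1 /mnt/usb
tar -xzf billix-0.21.tar.gz -C /mnt/usb  # tarball name is illustrative
umount /mnt/usb
install-mbr -p1 /dev/sdb
syslinux -s /dev/sdb1
```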

the newly installed operating system after installation, because the OS bits that are downloaded are typically up to date. Note that when using the Red Hat-based installers (CentOS 4.6, CentOS 5.1 and Fedora 8), the system may appear to hang during the download of a file called minstg2.img. The system probably isn't hanging; it's just downloading that file, which is fairly large (around 40MB), so it can take a while depending on the speed of the mirror and the speed of your connection. Take care not to specify the USB disk accidentally as the install target for the distribution you are attempting to install.

The memtest86 utility has been around for quite a few years, yet it's a key tool for a sysadmin when faced with a flaky computer. It does only one thing, but it does it very well: it tests the RAM of a system very thoroughly. Simply boot off the USB drive, select memtest from the menu, and press Enter, and memtest86 will load and begin testing the RAM of the system immediately. At this point, you can remove the USB drive from the computer. It's no longer needed, as memtest86 is very small and loads completely into memory on startup.

The ntpwd Windows password "cracking" tool can be a controversial tool, but it is included in the Billix distribution because, as a system administrator, I've been asked countless times to get into Windows systems (or accounts on Windows systems) where the password has been lost or forgotten. The ntpwd utility can be a bit daunting, as the UI is text-based and nearly nonexistent, but it does a good job of mounting FAT32- or NTFS-based partitions, editing the SAM account database and saving those changes. Be sure to read all the messages that ntpwd displays, and take care to select the proper disk partition to edit. Also, take the program's advice and nullify a password rather than trying to change it from within the interface; zeroing the password works much more reliably.

DBAN (otherwise known as Darik's Boot and Nuke) is a very good "nuke it from orbit" hard disk wiper. It provides various levels of wipe, from a basic "overwrite the disk with zeros" to a full DoD-certified, multipass wipe. Like memtest86, DBAN is small and loads completely into memory, so you can boot the utility, remove the USB drive, start a wipe and move on to another system. I've used this to wipe clean disks on systems before handing them over to a recycler or before selling a system.

In closing, Billix may not make you coffee in the morning or eradicate Windows from the face of the earth, but having a USB key in your pocket that offers you the functionality to do all of those tasks quickly and easily can make the life of a system administrator (or any Linux-oriented person) much easier.

Bill Childers is an IT Manager in Silicon Valley, where he lives with his wife and two children. He enjoys Linux far too much, and probably should get more sun from time to time. In his spare time, he does work with the Gilroy Garlic Festival, but he does not smell like garlic.




Expanding Billix
It's relatively easy to expand Billix to support other Linux distributions, such as Knoppix or the Ubuntu live CDs. Copy the contents of the Billix USB tarball to a directory on your hard disk, and download the distro you want. Copy the necessary kernel and initrd to the directory where you put the contents of the USB tarball, taking care to rename any files if there are files in that directory with the same name. Copy any compressed filesystems that your new distro may use to the USB drive (for example, Knoppix has the KNOPPIX directory, and Puppy Linux uses PUP_XXX.SFS). Then, look at the boot configuration for that distro (it should be in isolinux.cfg). Take the necessary lines out of that file, and put them in the Billix syslinux.cfg file, changing filenames as necessary. Optionally, you can add a menu item to the boot.msg file. Finally, run syslinux -s <device>, and reboot your system to test out your newly expanded Billix.
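As an illustration only (the kernel, initrd and cheatcode names vary by Knoppix release, so treat every name below as a placeholder rather than something copied from a real isolinux.cfg), a borrowed entry in syslinux.cfg might look like:

```
label knoppix
  kernel linux
  append initrd=minirt.gz ramdisk_size=100000 lang=us vga=791
```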

I have a 2GB USB drive that has a "Super-Billix" installation that includes Knoppix and Ubuntu 8.04. An added bonus of having the entire Ubuntu live CD in your pocket is that, thanks to the speed of USB 2.0, you can install Ubuntu in less than ten minutes, which would be really useful at an installfest. There is good information on creating Ubuntu-bootable USB drives available at the Pendrive Linux Web site.

Alternatively, a really neat thing to do (but way beyond the scope of this article) is to convert Billix into a network-boot (via Pre-Execution Environment, or PXE) environment. I've actually got a VMware virtual machine running Billix as a PXE boot server.

Resources

Billix Project Page: sourceforge.net/projects/billix

Damn Small Linux: www.damnsmalllinux.org

DBAN Project Page: dban.sourceforge.net

Knoppix: www.knoppix.net

Pendrive Linux: www.pendrivelinux.com

Syslinux: syslinux.zytor.com/index.php

Pxelinux: syslinux.zytor.com/pxe.php

U3 Removal Software: www.u3.com/uninstall


If a tree falls in the woods and no one is there to hear it, does it make a sound? This is the classic query designed to place your mind into the Zen-like state known as the silent mind. Whether or not you want to hear a tree fall, if you run a network, you probably want to hear a server when it goes down. Many organizations utilize the long-established Simple Network Management Protocol (SNMP) as a way to monitor their networks proactively and listen for things going down.

At a rudimentary level, SNMP requires only two items to work: a management server and a managed device (or devices). The management server pulls status and health information at regular intervals from the managed devices and stores the information in a table. Managed devices use local SNMP agents to notify the management server when defined behavior occurs (such as errors or "traps"), which are stored in the same table on the server. The result is an accurate, real-time reporting mechanism for outages. However, SNMP as a protocol does not stipulate how the data in these tables is to be presented and managed for the end user. That's where a promising new open-source network-monitoring software called Zenoss (pronounced Zeen-ohss) comes in.

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment. The Core version of Zenoss used in this article is freely available under the GPLv2. An Enterprise version also is available with additional features and support. In this article, we install Zenoss on a CentOS 5.1 system to observe its usefulness in a network-monitoring role. From there, we create a simulated multisystem server network using the following systems: a Fedora-based Postfix e-mail server, an Ubuntu server running Apache and a Windows server running File and Print services. To conserve space, only the CentOS installation is discussed in detail here. For the managed systems, only SNMP installation and configuration are covered.

Building the Zenoss Server
Begin by selecting your hardware. Zenoss lacks specific hardware requirements, but it relies heavily on MySQL, so you can use MySQL requirements as a rough guideline. I recommend using the fastest processor available, 1GB of memory, fast enough hard disks to provide acceptable MySQL performance and Gigabit Ethernet for the network. I ran several test configurations, and this configuration seemed adequate enough for a medium-size network (100+ nodes/devices). To keep configuration simple, all firewalls and SELinux instances were disabled in the test environment. If you use firewalls in your environment, open ports 161 (SNMP), 8080 (Zenoss Management Page) and 514 (if you integrate syslog with Zenoss).

Install CentOS 5.1 on the server using your own preferences. I used a bare install with no X Window System or desktop manager. Assign a static IP address and any other pertinent network information (DNS servers and so forth). After the OS install is complete, install the following packages using the yum command below:

yum install mysql mysql-server net-snmp net-snmp-utils gmp httpd
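If either service does not come up on its own after the install, a typical CentOS 5 sequence to start it and enable it for the default runlevels looks like this (a sketch using the stock SysV tools):

```shell
# Start MySQL and Apache now, and enable them at boot
service mysqld start
service httpd start
chkconfig mysqld on
chkconfig httpd on
```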

If the mysqld or the httpd service has not started after yum installs it, start it and set it to run for your configured runlevel. Next, download the latest Zenoss Core rpm from Sourceforge.net (2.1.3 at the time of this writing), and install it using rpm from the command line. To start all the Zenoss-related dæmons after the rpm has been installed, type the following at a command prompt:

service zenoss start

Launch a Web browser from any machine, and type the IP address of the Zenoss server using port 8080 (for example, http://192.168.142.6:8080). Log in to the site using the default account admin with a password of zenoss. This brings up the main dashboard. The dashboard is a compartmentalized view of the state of your managed devices. If you don't like the default display, you can arrange your dashboard any way you want using the various drop-down lists on the portlets (windows). I recommend setting the Production States portlet to display Production, so we can see our test systems after they are added.

Almost everything related to managed devices in Zenoss revolves around classes. With classes, you can create an infinite number of systems, processes or service classifications to monitor. To begin adding devices, we need to set our SNMP community strings at the top-level Devices class. SNMP community strings are like passphrases used to authenticate traffic between devices. If one device wants to communicate with another, they must have matching community names/strings. In many deployments, administrators use the default community name of public (and/or private), which creates a security risk. I recommend changing these strings and making them into a short phrase. You can add numbers and characters to make the community name more complex to guess/crack, but I find phrases easier to remember.

Click on the Devices link on the navigation menu on the left, so that Devices is listed near the top of the page. Click on the zProperties tab, and scroll down. Enter an SNMP

Zenoss and the Art of Network Monitoring
If a server goes down, do you want to hear it? JERAMIAH BOWLING



community string in the zSNMPCommunity field. For our test environment, I used the string whatsourclearanceclarence. You can use different strings with different subclasses of systems or individual systems, but by setting it at the Devices class, it will be used for any subclasses, unless it is overridden. You also could list multiple strings in the zSNMPCommunities under the Devices class, which allows you to define multiple strings for the discovery process discussed later. Make sure your community string (zSNMPCommunity) is in this list.

Installing Net-SNMP on Linux Clients
Now let's set up our Linux systems so they can talk to the Zenoss server. After installing and configuring the operating systems on our other Linux servers, install the Net-SNMP package on each, using the following command on the Ubuntu server:

sudo apt-get install snmpd

And on the Fedora server, use:

yum install net-snmp

Once the Net-SNMP packages are installed, edit out any other lines in the Access Control sections at the beginning of /etc/snmp/snmpd.conf, and add the following lines:

##       sec.name   source            community
com2sec  local      localhost         whatsourclearanceclarence
com2sec  mynetwork  192.168.142.0/24  whatsourclearanceclarence

##     group.name  sec.model  sec.name
group  MyROGroup   v1         local
group  MyROGroup   v1         mynetwork
group  MyROGroup   v2c        local
group  MyROGroup   v2c        mynetwork

##         incl/excl  subtree  mask
view  all  included   .1       80

##      context  sec.model  sec.level  prefix  read  write  notif
access  MyROGroup  ""  any  noauth  exact  all  none  none

Do not edit out any lines beneath the last Access Control sections. Please note that the above is only a mildly restrictive configuration. Consult the snmpd.conf file or the Net-SNMP documentation if you want to tighten access. On the Ubuntu server, you also may have to change the following line in the /etc/default/snmpd file to allow SNMP to bind to anything other than the local loopback address:

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'
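Once snmpd has been restarted with the new configuration, you can sanity-check a client from the Zenoss server before moving on (the community string is this article's example; substitute your client's actual address for the placeholder):

```shell
# Walk the system subtree on a managed client using the v2c community
snmpwalk -v 2c -c whatsourclearanceclarence <client-ip> system
```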

Installing SNMP on Windows
On the Windows server, access the Add/Remove Programs utility from the Control Panel. Click on the Add/Remove Windows Components button on the left. Scroll down the list of components, check off Management and Monitoring Tools, and click on the Details button. Check Simple Network Management Protocol in the list, and click OK to install. Close the Add/Remove window, and go into the Services console from Administrative Tools in the Control Panel. Find the SNMP service in the list, right-click on it, and click on Properties to bring up the service properties tabs. Click on the Traps tab, and type in the community name. In the list of Trap Destinations, add the IP address of the Zenoss server. Now, click on the Security tab, and check off the Send authentication trap box, enter the community name, and give it READ-ONLY rights. Click OK, and restart the service.

Return to the Zenoss management Web page. Click the Devices link to go into the subclass of /Devices/Servers/Windows, and on the zProperties tab, enter the name of a domain admin account and password in the zWinUser and zWinPassword fields. This account gives Zenoss access to the Windows Management Instrumentation (WMI) on your Windows systems. Make sure to click Save at the bottom of the page before navigating away.

Adding Devices into Zenoss
Now that our systems have SNMP, we can add them into Zenoss. Devices can be added individually or by scanning the network. Let's do both. To add our Ubuntu server into Zenoss, click on the Add Device link under the Management navigation section. Enter the IP address of the server and the community name. Under Device Class Path, set the selection to /Server/Linux. You could add a variety of other hardware, software and Zenoss information on this page before adding a system, but at a minimum, an IP address, name and community name is required (Figure 1). Click the Add Device button, and the discovery process runs. When the results are displayed, click on the link to the new device to access it.

Figure 1. Adding a Device into Zenoss

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment.

www.linuxjournal.com | 19

To scan the network for devices, click the Networks link under the Browse By section of the navigation menu. If your network is not in the list, add it using CIDR notation. Once added, check the box next to your network, and use the drop-down arrow to click on the Select Discover Devices option. You will see a similar results page as the one from before. When complete, click on the links at the bottom of the results page to access the new devices. Any device found will be placed in the /Discovered class. Because we should have discovered the Fedora server and the Windows server, they should be moved to the /Devices/Servers/Linux and /Devices/Servers/Windows classes, respectively. This can be done from each server's Status tab by using the main drop-down list and selecting Manage→Change Class.

If all has gone well so far, we have a functional SNMP monitoring system that is able to monitor heartbeat/availability (Figure 2) and performance information (Figure 3) on our systems. You can customize various other Status and Performance Monitors to meet your needs, but here we will use the default localhost monitors.

Figure 2. The Zenoss Dashboard

Figure 3. Performance data is collected almost immediately after discovery.

Creating Users and Setting E-Mail Alerts
At this point, we can use the dashboard to monitor the managed devices, but we will be notified only if we visit the site. It would be much more helpful if we could receive alerts via e-mail. To set up e-mail alerting, we need to create a separate user account, as alerts do not work under the admin account. Click on the Settings link under the Management navigation section. Using the drop-down arrow on the menu, select Add User. Enter a user name and e-mail address when prompted. Click on the new user in the list to edit its properties. Enter a password for the new account, and assign a role of Manager. Click Save at the bottom of the page. Log out of Zenoss, and log back in with the new account. Bring the settings page back up, and enter your SMTP server information. After setting up SMTP, we need to create an Alerting Rule for our new user. Click on the Users tab, and click on the account just created in the list. From the resulting page, click on the Edit tab, and enter the e-mail address to which you want alerts sent. Now, go to the Alerting Rules tab, and create a new rule using the drop-down arrow. On the Edit tab of the new Alerting Rule, change the Action to email, Enabled to True, and change the Severity formula to >= Warning (Figure 4). Click Save.

Figure 4. Creating an Alert Rule

Figure 5. Zenoss alerts are sent fresh to your mailbox.

SYSTEM ADMINISTRATION

The above rule sends alerts when any Production server experiences an event rated Warning or higher (Figure 5). Using a filter, you can create any number of rules and have them apply only to specific devices or groups of devices. If you want to limit your alerts by time, to working hours for example, use the Schedule tab on the Alerting Rule to define a window. If no schedule is specified (the default), the rule runs all the time. In our rule, only one user will be notified. You also can create groups of users from the Settings page, so that multiple people are alerted, or you could use a group e-mail address in your user properties.

Services and Processes
We can expand our view of the test systems by adding a process and a service for Zenoss to monitor. When we refer to a process in Zenoss, we mean an active program, usually a daemon, running on a managed device. Zenoss uses regular expressions to monitor processes.

To monitor Postfix on the mail server, first let's define it as a process. Navigate to the Processes page under the Classes section of the navigation menu. Use the drop-down arrow next to OS Processes, and click Add Process. Enter Postfix as the process ID. When you return to the previous page, click on the link to the new process. On the Edit tab of the process, enter "master" in the Regex field. Click Save before navigating away. Go to the zProperties tab of the process, and make sure the zMonitor field is set to True. Click Save again. Navigate back to the mail server from the dashboard, and on the OS tab, use the topmost menu's drop-down arrow to select Add→Add OSProcess. After the process has been added, we will be alerted if the Postfix process degrades or fails. While still on the OS tab of the

Figure 6. Zenoss comes with a plethora of predefined Windows services to monitor.

server, place a check mark next to the new Postfix process, and from the OS Processes drop-down menu, select Lock OSProcess. On the next set of options, select Lock from deletion. This protects the process from being overwritten if Zenoss remodels the server.
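The "master" regex works because Postfix's supervising daemon appears as master in the process table. A quick way to sanity-check a process regex before handing it to Zenoss is to run it against a process listing; a simulated listing is used here so the sketch is self-contained:

```shell
# Simulated output of `ps -e -o comm` on a Postfix mail server;
# Zenoss matches process names against the regex you supply (here: master)
printf 'init\nmaster\npickup\nqmgr\nsshd\n' | grep -E 'master'
```

On the real server, you would pipe `ps -e -o comm` into the same grep to confirm the pattern matches exactly the daemon you intend and nothing else.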

Services in Zenoss are defined by active network ports instead of running daemons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1-65535 and include common network apps/protocols, such as SMTP (Port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the main menu's drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).

Only the Beginning
There are a multitude of other features in Zenoss that space here prevents covering, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9) and Zenpacks that provide additional monitoring and performance-capturing capabilities for common applications.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind as long as you work with networks, with Zenoss, at least you will be able to sleep at night, knowing you will hear things when they go down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

Resources

Zenoss: www.zenoss.com

Zenoss SourceForge Downloads Page: sourceforge.net/project/showfiles.php?group_id=163126

NET-SNMP: net-snmp.sourceforge.net

CentOS: www.centos.org

CentOS 5 Mirrors: isoredirect.centos.org/centos/5/isos/i386

Figure 7. Monitoring HTTP as an IPService

Figure 8. Zenoss automatically maps your network for you.

Figure 9. Multiple sites can be monitored geographically with the Google Maps API.

In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market: Dell and HP/Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects, such as the Etherboot Project and the Linux Terminal Server Project, have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys world-wide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?
A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot eXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP, stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4); for example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.
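As a concrete illustration of stage 3, the server's NFS export for the clients' root filesystem might look like the following (a minimal sketch; the /opt/ltsp/i386 path and the 192.168.0.0/24 client subnet are assumptions based on LTSP defaults, not values taken from our testbed):

```
# /etc/exports on the boot server: export the LTSP root read-only to the clients
/opt/ltsp/i386  192.168.0.0/24(ro,no_root_squash,async,no_subtree_check)
```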

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a

Thin Clients Booting over a Wireless Bridge
How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON

Figure 1. LTSP Traffic Profile and Boot Sequence

thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.

How to Set Up a Thin-Client Network with a Wireless Bridge
One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1 serving PXE clients.
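For reference, a sketch of what that dhcpd.conf typically contains; the values below are illustrative LTSP-style defaults, not necessarily what your install generates:

```
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.20 192.168.0.250;
    option routers 192.168.0.1;
    next-server 192.168.0.1;              # TFTP server handing out the kernel
    option root-path "/opt/ltsp/i386";    # NFS root the clients will mount
    filename "/ltsp/i386/pxelinux.0";
}
```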

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap: 192.168.0.0/24
rpc.mountd: 192.168.0.0/24
rpc.statd: 192.168.0.0/24
in.tftpd: 192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart
sudo invoke-rc.d nfs-common restart
sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide to getting the bridge running so that the clients can boot over the bridge:

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all daemon programs manually.

Evaluating the Performance of a Thin-Client Network
The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to a wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.
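The dB-to-distance conversion can be understood through the free-space path-loss model (a simplifying assumption on our part; the chamber may apply a refined model). For a distance d in km and a frequency f in MHz:

```latex
\mathrm{FSPL}(\mathrm{dB}) = 20\log_{10}(d) + 20\log_{10}(f) + 32.44
```

Under this model, each additional 6dB of attenuation roughly doubles the simulated distance between the two wireless devices.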

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber;

Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Conclusion
As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our test shows, as illustrated in Figure 5, that heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.
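The service-stripping suggestion can be sketched as follows. The sysvinit link names (S20ldm, S10sysklogd) are assumptions, and a throwaway directory stands in for the chroot's /etc/rc2.d so the commands are safe to try:

```shell
# Stand-in for the LTSP chroot's /etc/rc2.d (on a real system this would be
# something like /opt/ltsp/i386/etc/rc2.d); link names here are hypothetical
mkdir -p /tmp/ltsp-demo/rc2.d && cd /tmp/ltsp-demo/rc2.d
touch S10sysklogd S20ldm S99rmnologin

mv S20ldm S01ldm       # start LDM as the first service in rc2.d
rm -f S10sysklogd      # drop system logging from thin-client startup
ls
```

On the real chroot, verify the actual link names first with `ls /opt/ltsp/i386/etc/rc2.d` before renaming anything.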

Acknowledgement
This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland project, under Grant 03IN31396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org

It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now, you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network, instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in remote power and remote terminals, or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup
You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP
Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), and locate the subnet section (or each host section, if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;
filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus
Set up a PXE server, and then add menus to boot kickstart images, rescue disks and diagnostic tools, all from the network. KYLE RANKIN

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot.

Install TFTP
After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
# the trivial file transfer protocol.
# The tftp protocol is often used to boot diskless
# workstations, download configuration files to network-aware
# printers, and to start the installation process for
# some operating systems.
service tftp
{
        disable         = no
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /var/lib/tftpboot
        per_source      = 11
        cps             = 100 2
        flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 UDP) as input.

Add Syslinux
Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.
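When everything is in place, the TFTP root ends up looking something like the following sketch (the kernel and initrd names are placeholders for whatever files you copy in later):

```
/var/lib/tftpboot/
    pxelinux.0          # PXE-enabled syslinux binary served by TFTP
    pxelinux.cfg/       # per-host and default configuration files
        default
    vmlinuz-...         # kernels and initrds referenced by the configs
    initrd-...
```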

PXE Menu Magic
You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).
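The transformation from MAC address to filename is mechanical, so it is easy to script when generating per-host configs (a small sketch; the 01- prefix is the ARP hardware type for Ethernet):

```shell
# Turn a MAC address into the pxelinux.cfg filename that targets that host
mac="88:99:AA:BB:CC:DD"
echo "01-$(echo "$mac" | tr 'A-Z:' 'a-z-')"   # prints 01-88-99-aa-bb-cc-dd
```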

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.
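To see which hex filenames pxelinux will try for a given client, you can reproduce the search order in shell (a sketch using the example subnet from the dhcpd.conf above):

```shell
# pxelinux converts the client IP to uppercase hex, then strips one
# trailing digit per failed lookup; e.g. 10.0.0.200 -> 0A0000C8, 0A0000C, ...
ip="10.0.0.200"
hex=$(printf '%02X%02X%02X%02X' $(echo "$ip" | tr '.' ' '))
while [ -n "$hex" ]; do
    echo "$hex"
    hex=${hex%?}
done
```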

Finally, if no specific files can be found, pxelinux will look for a file named default and use it.

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux
label linux
    kernel vmlinuz-centos-3.6
    append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?
The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools like Knoppix and memtest86+ available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus
Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more-specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

label 2
    kernel vmlinuz-centos-4.5-64
    append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option lists a message, if there are any to display, by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------

<F1> Main | <F2> Help

Default image will boot in 30 seconds...

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.
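The contents of such a help file are free-form text that pxelinux simply prints to the screen. Here is one possible f2.msg (the wording is invented for illustration):

```
----| Help |-----------------------------
Option 1 boots Knoppix 5.1.1 over NFS.
Option 2 kickstarts a CentOS 4.5 x86_64
install from the kickstart server.
Press F1 to return to the main menu.
-----------------------------------------
```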

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk
One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk; I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the one I used above in my example:

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx5.1.1 and miniroot-knx5.1.1.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove the 2




from the append line.

The final step might be the largest for you, if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.
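NFS setup details vary by distribution, but as a minimal sketch, the share used in this example could be exported with a single /etc/exports line like the following (the ro, async and no_subtree_check options are my assumptions, not from the article):

```
/mnt/knoppix/5.1.1  *(ro,async,no_subtree_check)
```

Running exportfs -ra afterward re-reads the exports file and activates the share.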

Extra Features: Memtest86+
Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
    kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion
There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.


New LinuxJournal.com Mobile

We are all very excited to let you know that LinuxJournal.com is optimized for mobile viewing. You can enjoy all of our news, blogs and articles from anywhere you can find a data connection on your phone or mobile device. We know you find it difficult to be separated from your Linux Journal, so now you can take LinuxJournal.com everywhere. Need to read that latest shell script trick right now? You got it. Go to m.linuxjournal.com to enjoy this new experience, and be sure to let us know how it works for you.

-- KATHERINE DRUCKMAN

Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan
IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825-1829). Later, in 1998, these RFCs were deprecated by RFCs 2401-2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

Creating VPNs with IPsec and SSL/TLS
How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN

Figure 1. A Basic VPN Tunnel

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. SWAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a set-mark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.
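As an illustration of that marking idea (this fragment is my sketch, not from the article; the peer address and mark value are assumptions), an iptables-restore fragment might mark ESP traffic from a known peer in the mangle table and then match on that mark in the filter table, since the mark survives decryption:

```
*mangle
-A PREROUTING -p esp -s 192.168.0.92 -j MARK --set-mark 1
COMMIT
*filter
-A INPUT -m mark --mark 1 -j ACCEPT
COMMIT
```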

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN
OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan
First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
    # Simply use raw RSA keys.
    # After starting openswan, run:
    #   ipsec showhostkey --left (or --right)
    # and fill in the connection similarly
    # to the example below.
    left=192.168.0.89
    leftrsasigkey=0sAQPP
    # The remote user.
    right=192.168.0.92
    rightrsasigkey=0sAQON
    type=tunnel
    auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsa.key

Then, copy and paste the contents of rsa.key into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of:

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. The output of ipsec verify might look like this:

Checking your system to see if IPsec got installed and started correctly:
Version check and ipsec on-path                              [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                         [OK]
NETKEY detected, testing for disabled ICMP send_redirects    [OK]
NETKEY detected, testing for disabled ICMP accept_redirects  [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets)  [OK]
Checking that pluto is running                               [OK]
Checking for 'ip' command                                    [OK]
Checking for 'iptables' command                              [OK]
Opportunistic Encryption Support                             [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets flowing in the same direction; this is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.



Example: Setting Up a VPN Tunnel with OpenVPN
First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side over a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest key; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.
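As a sketch (not from the article), an iptables-restore fragment that accepts traffic arriving on any TUN interface could look like the following; the tun+ wildcard matches tun0, tun1 and so on:

```
*filter
-A INPUT -i tun+ -j ACCEPT
-A FORWARD -i tun+ -j ACCEPT
COMMIT
```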

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar  3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official
port number assignment by IANA.  OpenVPN 2.0-beta16 and earlier used 5000 as
the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read from or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).

IPsec and OpenVPN: a Short Comparison
IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in an incompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in the kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement
Thanks to Mr. Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era
Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

- Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

- Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

- Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

- Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C# and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security
Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.
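As a sketch of how definer rights can be used (the table, procedure and account names here are invented for illustration, not taken from the article), the application account is granted EXECUTE on a procedure while holding no direct privileges on the underlying table:

```sql
-- Runs with the privileges of 'owner', not of the caller.
CREATE DEFINER = 'owner'@'localhost' PROCEDURE log_transfer(
    IN from_account INT, IN to_account INT, IN amount DECIMAL(10,2))
SQL SECURITY DEFINER
BEGIN
    INSERT INTO transfer_log (from_account, to_account, amount, logged_at)
    VALUES (from_account, to_account, amount, NOW());
END;

-- The application account can call the procedure but cannot
-- INSERT into transfer_log directly.
GRANT EXECUTE ON PROCEDURE bank.log_transfer TO 'app'@'%';
```

In the mysql client, you would wrap the CREATE PROCEDURE statement in DELIMITER commands so the semicolons inside the body are not treated as statement terminators.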

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

Data Abstraction
It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications

Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

www.linuxjournal.com | 35

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed

It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.
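The article doesn't reproduce the benchmark code, but a prime-counting procedure in the MySQL stored program language might look like this sketch (the procedure name and structure are my own, not the author's benchmark):

```sql
-- Hypothetical benchmark: count the primes below n by trial division
DELIMITER //

CREATE PROCEDURE count_primes(IN n INT, OUT prime_count INT)
BEGIN
  DECLARE i INT DEFAULT 2;
  DECLARE j INT;
  DECLARE is_prime INT;
  SET prime_count = 0;
  WHILE i < n DO
    SET is_prime = 1;
    SET j = 2;
    WHILE j * j <= i AND is_prime = 1 DO
      IF i MOD j = 0 THEN
        SET is_prime = 0;
      END IF;
      SET j = j + 1;
    END WHILE;
    SET prime_count = prime_count + is_prime;
    SET i = i + 1;
  END WHILE;
END //

DELIMITER ;
```

The same trial-division loop, transliterated into PHP, Perl, Java or C, is what gives the language-to-language comparison its meaning.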

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine and you have a choice between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

SYSTEM ADMINISTRATION

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network: the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation

Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping

It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?

Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations

MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes, and they will take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
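As a hedged illustration (the function, table and column names are invented for this example), such a function might look like:

```sql
-- Hypothetical function: days taken to complete an order
CREATE FUNCTION order_age_days(ordered DATETIME, completed DATETIME)
  RETURNS INT DETERMINISTIC
  RETURN TIMESTAMPDIFF(DAY, ordered, completed);

-- It can then be used inside ordinary SQL, for example:
--   SELECT AVG(order_age_days(order_date, completed_date)) FROM orders;
```

Centralizing the calculation in one function means every query that needs the metric computes it the same way.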

You Decide

The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006, with Steven Feuerstein). Guy can be contacted at guyharrison@quest.com.


In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat

Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare, on my Ubuntu systems I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat

Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.
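One way to generate a suitably long random key for the sha1 method is to hash some bytes from /dev/urandom (this helper is my own suggestion, not from the article; the output filename is illustrative):

```shell
# Generate a long random key and emit an authkeys-style file for sha1
key=$(head -c 32 /dev/urandom | sha1sum | awk '{print $1}')

printf 'auth 1\n1 sha1 %s\n' "$key" > authkeys.example
chmod 600 authkeys.example   # Heartbeat requires mode 600, owned by root
cat authkeys.example
```

Copy the resulting file to /etc/ha.d/authkeys on every node; since you never type the key by hand, its length costs you nothing.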

The ha.cf File

The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive: the time between heartbeats, in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try

Getting Started with Heartbeat

Your first step toward high-availability bliss. DANIEL BARTHOLOMEW


Listing 1. The /etc/ha.d/authkeys File

auth 1

1 sha1 ThisIsAVeryLongAndBoringPassword


increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is

ucast <device> <peer-ip-address>

Here is an example

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs the serial device is /dev/ttyS1

# On stratton the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos heartbeat ip address is 192.168.1.11

# phobos heartbeat ip address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File

The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.
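The left-to-right start and right-to-left stop ordering can be sketched in a few lines of shell (the resource names here are examples only):

```shell
# Model of haresources ordering: start left to right, stop right to left
resources="IPaddr Filesystem nfs-kernel-server"

start_order=$(printf '%s\n' $resources)        # takeover: first to last
stop_order=$(printf '%s\n' $resources | tac)   # handoff: last to first

echo "start order: $(echo $start_order)"
echo "stop order:  $(echo $stop_order)"
```

This mirror-image ordering is why position matters: a filesystem listed before a service is mounted before the service starts and unmounted only after it stops.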

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.
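The :: and / separators can be picked apart with plain shell parameter expansion; a small sketch (the spec value is just the example above):

```shell
# Split an IPaddr resource spec into its component fields
spec="IPaddr::192.168.1.9/24/eth0"

params=${spec#*::}    # strip the script name -> 192.168.1.9/24/eth0
ip=${params%%/*}      # address: 192.168.1.9
rest=${params#*/}     # 24/eth0
prefix=${rest%%/*}    # subnet prefix: 24
device=${rest#*/}     # interface: eth0

echo "address=$ip prefix=$prefix device=$device"
```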

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).


Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.12.1 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting

Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion

I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use, Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation

To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of space. I recommend at least a 1GHz processor or above, with 512MB or more of memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser, under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication

Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING


numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script, or directly edit /etc/rc.d/rc.local, to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases, and define options for


Figure 1. Install Options

Figure 2. Main Console

Figure 3 Tasks Tab

Figure 4 Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy; however, I could not get this feature to work properly.
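As a quick sketch of the default login-ID convention described above (first initial plus surname, lowercased), consider the following shell function; make_uid is my own illustrative helper, not part of FDS:

```shell
# Hypothetical helper mimicking FDS's default login-ID convention:
# first initial + surname, lowercased.
make_uid() {
  first=$1
  last=$2
  initial=$(printf '%.1s' "$first")            # first character of first name
  printf '%s%s\n' "$initial" "$last" | tr '[:upper:]' '[:lower:]'
}

make_uid "Jeramiah" "Bowling"   # prints jbowling
```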

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora/FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already to make life easier.
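Before enabling replication, the time and name-resolution prerequisites above can be spot-checked from server one with commands such as these (server-two.example.com is a placeholder for your second server):

```
# Hostnames here are placeholders; run these from server one.
getent hosts server-two.example.com   # verify name resolution works
ntpq -p                               # verify NTP peers are being polled
date -u                               # compare the UTC clock against server two
```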

The next step is creating a Replication Manager account to bind one server's directory to the other, and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userRoot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On server two, perform the same steps as just completed: creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

SYSTEM ADMINISTRATION

In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others.

Figure 5. User Screen

Figure 6. Setting Up Replica

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userRoot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication, which sends only deltas, rather than the entire directory database, when changes occur and is very useful over WAN connections. On the Schedule screen, select Always keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting LDIF file to server two, and import and initialize it from the Directory Server Tasks pane. When you click Next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.
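Beyond the logs, one way to confirm replication is to add a test entry on one server and query the other for it. A hedged sketch using the standard ldapsearch client; hostname, base DN and the test uid are placeholders for your environment:

```
# Placeholders throughout: adjust host, port, bind DN and base DN.
ldapsearch -x -h server-two.example.com -p 3891 \
    -D "cn=Directory Manager" -W \
    -b "dc=example,dc=com" "(uid=testuser)"
```

If the entry created on server one comes back from server two, the agreement is replicating.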

Authenticating Desktop Clients

With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM, the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tab. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM, instead of your local passwd and shadow files, to authenticate users.
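For reference, the changes the utility makes boil down to a few lines in the files mentioned above. A sketch of the relevant fragments, with example.com standing in for your domain and directory server:

```
# /etc/ldap.conf (host and base are placeholders for your environment)
host ldap.example.com
base dc=example,dc=com

# /etc/nsswitch.conf (consult LDAP after local files)
passwd: files ldap
shadow: files ldap
group:  files ldap
```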

www l inux journa l com | 4 5

Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some dll issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and makes it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings, without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.



Setting up replication in FDS is a relatively painless process.

Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


Does anyone remember Linuxcare? Founded in 1998, Linuxcare was a company that provided support services for Linux users in corporate environments. I remember seeing Linuxcare at the first-ever LinuxWorld conference in San Jose, and the thing I took away from that LinuxWorld was the Linuxcare Bootable Business Card (BBC). The BBC was a 50MB cut-down Linux distribution that fit on a business-card-size compact disc. I used that distribution to recover and repair quite a few machines, until the advent of Knoppix. I always loved the portability of that little CD though, and I missed it greatly until I stumbled across Damn Small Linux (DSL) one day.

After reading through the DSL Web site, I discovered that it was possible to run DSL off of a bootable USB key, and that old love for the Bootable Business Card was rekindled in a new way. It wasn't until I had a conversation with fellow sysadmin Kyle Rankin about the PXE boot environment he'd implemented that I realized it might be possible to set up a USB key to do more than merely boot a recovery environment. Before long, I had added the CentOS and Ubuntu netinstalls to my little USB key. Not long after that, I was mentioning this in my favorite IRC channel, and one of the fellows in there suggested I put the code on SourceForge and call it Billix. I'd had a couple beers by then and thought it sounded like a great idea. In that instant, Billix was born.

Billix is an aggregation of many different tools that can be useful to system administrators, all compressed down to fit within a 256MB bootable USB thumbdrive. The 256MB size is not an arbitrary number; rather, it was chosen because USB thumbdrives are very inexpensive at that size (many companies now give them away as advertising gimmicks). This allows me to have many Billix keys lying around, just waiting to be used. Because the keys are cheap or free, I don't feel bad about leaving one in a server for a day or two. If your USB drive is larger than 256MB, you still can use it for its designed purpose: storing files. Billix doesn't hamper normal use of the USB drive in any way. There also is an ISO distribution of Billix, if you want to burn a CD of it, but I feel it's not nearly as convenient as having it on a USB key.

The current Billix distribution (0.21 at the time of this writing) includes the following tools:

Damn Small Linux 4.2.5

Ubuntu 8.04 LTS netinstall

Ubuntu 7.10 netinstall

Ubuntu 6.06 LTS netinstall

Fedora 8 netinstall

CentOS 5.1 netinstall

CentOS 4.6 netinstall

Debian Etch netinstall

Debian Sarge netinstall

Memtest86 memory-checking utility

Ntpwd Windows password changing utility

DBAN disk wiper utility

So, with one USB key, a system administrator can recover or repair a machine, install one of eight different Linux distributions, test the memory in a system, get into a Windows machine with a lost password, or wipe the disks of a machine before repurpose or disposal. In order to install any of the netinstall-based Linux distributions, a working Internet connection with DHCP is required, as the netinstall downloads the installation bits for each distribution on the fly from Internet-based mirrors.

Hopefully, you're excited to check out Billix. You simply can download the ISO version and burn it to a CD to get started, but the full utility of Billix really shines when you install it on a USB disk. Before you install it on a USB disk, you need to meet the following prerequisites:

Billix: a Sysadmin's Swiss Army Knife

Turn that spare USB stick into a sysadmin's dream with Billix. BILL CHILDERS


It wasn't until I had a conversation with fellow sysadmin Kyle Rankin about the PXE boot environment he'd implemented that I realized it might be possible to set up a USB key to do more than merely boot a recovery environment.

256MB or greater USB drive with FAT- or FAT32-based filesystem.

Internet connection with DHCP (for netinstalls only; not required for DSL, Windows password removal or disk wiping with DBAN).

install-mbr (part of the mbr package on Ubuntu or Debian, needed for some USB drives).

syslinux (from the syslinux package on Ubuntu or Debian, required to create the bootsector on the USB drive).

Your system must be capable of booting from USB devices (most have this ability if they're made after 2005).

To install the USB-based version of Billix, first check your drive. If that drive has the U3 Windows software on it, you may want to remove it to unlock all of the drive's capacity (see the Resources section for U3 removal utilities, which are typically Windows-based). Next, if your USB drive has data on it, back up the data. I cannot stress this enough. You will be making adjustments to the partition table of the USB drive, so backing up any data that already is on the key is critical. Download the latest version of Billix from the SourceForge.net project page to your computer. Once the download is complete, untar the contents of the tarball to the root directory of your USB drive.

Now that the contents of the tarball are on your USB drive, you need to install a Master Boot Record (MBR) on the drive and set a bootsector on the drive. The Master Boot Record needs to be set up on the USB drive first. Issue an install-mbr -p1 <device> command (where <device> is your USB drive, such as /dev/sdb). Warning: make sure that you get the device of the USB drive correct, or you run the risk of messing up the MBR on your system's boot device. The -p1 option tells install-mbr to set the first partition as active (that's the one that will contain the bootsector).

Next, the bootsector needs to be installed within the first partition. Run syslinux -s <device/partition> (where <device/partition> is the device and partition of the USB drive, such as /dev/sdb1). Warning: much like installing the MBR, installing the bootsector can be a dangerous operation if you run it on the wrong device, so take care, and double-check your command line before pressing the Enter key.
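Putting the steps above together, the whole install looks something like the following. The device name /dev/sdb, the mountpoint and the tarball filename are examples only; verify your actual device before running anything:

```
# WARNING: /dev/sdb is an example device name; double-check yours first.
tar -xvf billix-0.21.tar -C /media/usb   # unpack Billix onto the mounted drive
umount /media/usb                        # unmount before writing boot code
install-mbr -p1 /dev/sdb                 # write the MBR; mark partition 1 active
syslinux -s /dev/sdb1                    # write the bootsector to partition 1
```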

Figure 1. The Billix Boot Menu

At this point, your USB drive can be unmounted safely, and you can test it out by booting from the USB drive. Once your system successfully boots from the USB drive, you should see a menu similar to the one shown in Figure 1. Simply choose the number for what you want to boot, run or install, and that distribution will spring into action. If you don't select a number, Damn Small Linux will boot automatically after 30 seconds.

Damn Small Linux is a miniature version of Knoppix (it actually has much of the automatic hardware-detection routines of Knoppix in it). As such, it makes an excellent rescue environment, or it can be used as a quick "trusted desktop" in the event you need to "borrow" a friend's computer to do something. I have used DSL in the past to commandeer a system temporarily at a cybercafé, so I could log in to work and fix a sick server. I've even used DSL to boot and mount a corrupted Windows filesystem, and I was able to save some of the data. DSL is fairly full-featured for its size, and it comes with two window managers (JWM or Fluxbox). It can be configured to save its data back to the USB disk in a persistent fashion, so you always can be sure you have your critical files with you and that it's easily accessible.

All the Linux distribution installations have one thing in common: they are all network-based installs. Although this is a good thing for Billix, as they take up very little space (around 10MB for each distro), it can be a bad thing during installation, as the installation time will vary with the speed of your Internet connection. There is one other upside to a network-based installation. In many cases, there is no need to update


Troubleshooting Billix

A few things can go wrong when converting a USB key to run Billix (or any USB-based distribution). The most common issue is for the USB drive to fail to boot the system. This can be due to several things. Older systems often split USB disk support into USB-Floppy emulation and USB-HDD emulation. For Billix to work on these systems, USB-HDD needs to be enabled. If your drive came with the U3 Windows-based software vault, this typically needs to be disabled or removed prior to installing Billix.

If you're seeing "MBR123" or something similar in the upper-left corner, but the system is hanging, you have a misconfigured MBR. Try install-mbr again, and make sure to use the -p1 switch. You will need to run syslinux again after running install-mbr. If all else fails, you probably need to wipe the USB drive and begin again. Back up the data on the USB drive, then use fdisk to build a new partition table (make sure to set it as FAT or FAT32). Use mkfs.vfat (with the -F 32 switch if it's a FAT32 filesystem) to build a new blank filesystem, untar the tarball again, and run install-mbr and syslinux on the newly defined filesystem.

the newly installed operating system after installation, because the OS bits that are downloaded are typically up to date. Note that when using the Red Hat-based installers (CentOS 4.6, CentOS 5.1 and Fedora 8), the system may appear to hang during the download of a file called minstg2.img. The system probably isn't hanging; it's just downloading that file, which is fairly large (around 40MB), so it can take a while depending on the speed of the mirror and the speed of your connection. Take care not to specify the USB disk accidentally as the install target for the distribution you are attempting to install.

The memtest86 utility has been around for quite a few years, yet it's a key tool for a sysadmin when faced with a flaky computer. It does only one thing, but it does it very well: it tests the RAM of a system very thoroughly. Simply boot off the USB drive, select memtest from the menu, and press Enter, and memtest86 will load and begin testing the RAM of the system immediately. At this point, you can remove the USB drive from the computer. It's no longer needed, as memtest86 is very small and loads completely into memory on startup.

The ntpwd Windows password "cracking" tool can be a controversial tool, but it is included in the Billix distribution because, as a system administrator, I've been asked countless times to get into Windows systems (or accounts on Windows systems) where the password has been lost or forgotten. The ntpwd utility can be a bit daunting, as the UI is text-based and nearly nonexistent, but it does a good job of mounting FAT32- or NTFS-based partitions, editing the SAM account database and saving those changes. Be sure to read all the messages that ntpwd displays, and take care to select the proper disk partition to edit. Also, take the program's advice and nullify a password rather than trying to change it from within the interface; zeroing the password works much more reliably.

DBAN (otherwise known as Darik's Boot and Nuke) is a very good "nuke it from orbit" hard disk wiper. It provides various levels of wipe, from a basic "overwrite the disk with zeros" to a full DoD-certified, multipass wipe. Like memtest86, DBAN is small and loads completely into memory, so you can boot the utility, remove the USB drive, start a wipe and move on to another system. I've used this to wipe clean disks on systems before handing them over to a recycler or before selling a system.

In closing, Billix may not make you coffee in the morning or eradicate Windows from the face of the earth, but having a USB key in your pocket that offers you the functionality to do all of those tasks quickly and easily can make the life of a system administrator (or any Linux-oriented person) much easier.

Bill Childers is an IT Manager in Silicon Valley, where he lives with his wife and two children. He enjoys Linux far too much, and probably should get more sun from time to time. In his spare time, he does work with the Gilroy Garlic Festival, but he does not smell like garlic.



Billix is an aggregation of many different tools that can be useful to system administrators, all compressed down to fit within a 256MB bootable USB thumbdrive.

Expanding Billix

It's relatively easy to expand Billix to support other Linux distributions, such as Knoppix or the Ubuntu live CDs. Copy the contents of the Billix USB tarball to a directory on your hard disk, and download the distro you want. Copy the necessary kernel and initrd to the directory where you put the contents of the USB tarball, taking care to rename any files if there are files in that directory with the same name. Copy any compressed filesystems that your new distro may use to the USB drive (for example, Knoppix has the KNOPPIX directory, and Puppy Linux uses PUP_XXX.SFS). Then, look at the boot configuration for that distro (it should be in isolinux.cfg). Take the necessary lines out of that file, and put them in the Billix syslinux.cfg file, changing filenames as necessary. Optionally, you can add a menu item to the boot.msg file. Finally, run syslinux -s <device>, and reboot your system to test out your newly expanded Billix.
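For example, a Knoppix entry carried over from its isolinux.cfg into the Billix syslinux.cfg might look roughly like this; the kernel, initrd and append values vary by release, so treat every value here as a placeholder and copy the real lines from the distro's own config:

```
# Hypothetical menu entry; copy the real values from the distro's isolinux.cfg.
LABEL knoppix
  KERNEL linux
  APPEND ramdisk_size=100000 init=/etc/init lang=us initrd=minirt.gz
```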

I have a 2GB USB drive that has a "Super-Billix" installation that includes Knoppix and Ubuntu 8.04. An added bonus of having the entire Ubuntu live CD in your pocket is that, thanks to the speed of USB 2.0, you can install Ubuntu in less than ten minutes, which would be really useful at an installfest. There is good information on creating Ubuntu-bootable USB drives available at the Pendrive Linux Web site.

Alternatively, a really neat thing to do (but way beyond the scope of this article) is to convert Billix into a network-boot (via Pre-Execution Environment, or PXE) environment. I've actually got a VMware virtual machine running Billix as a PXE boot server.

Resources

Billix Project Page: sourceforge.net/projects/billix

Damn Small Linux: www.damnsmalllinux.org

DBAN Project Page: dban.sourceforge.net

Knoppix: www.knoppix.net

Pendrive Linux: www.pendrivelinux.com

Syslinux: syslinux.zytor.com/index.php

Pxelinux: syslinux.zytor.com/pxe.php

U3 Removal Software: www.u3.com/uninstall


If a tree falls in the woods and no one is there to hear it, does it make a sound? This is the classic query designed to place your mind into the Zen-like state known as the silent mind. Whether or not you want to hear a tree fall, if you run a network, you probably want to hear a server when it goes down. Many organizations utilize the long-established Simple Network Management Protocol (SNMP) as a way to monitor their networks proactively and listen for things going down.

At a rudimentary level, SNMP requires only two items to work: a management server and a managed device (or devices). The management server pulls status and health information at regular intervals from the managed devices and stores the information in a table. Managed devices use local SNMP agents to notify the management server when defined behavior occurs (such as errors or "traps"), which are stored in the same table on the server. The result is an accurate, real-time reporting mechanism for outages. However, SNMP as a protocol does not stipulate how the data in these tables is to be presented and managed for the end user. That's where a promising new open-source network-monitoring software called Zenoss (pronounced Zeen-ohss) comes in.

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment. The Core version of Zenoss used in this article is freely available under the GPLv2. An Enterprise version also is available with additional features and support. In this article, we install Zenoss on a CentOS 5.1 system to observe its usefulness in a network-monitoring role. From there, we create a simulated multisystem server network using the following systems: a Fedora-based Postfix e-mail server, an Ubuntu server running Apache and a Windows server running File and Print services. To conserve space, only the CentOS installation is discussed in detail here. For the managed systems, only SNMP installation and configuration are covered.

Building the Zenoss Server

Begin by selecting your hardware. Zenoss lacks specific hardware requirements, but it relies heavily on MySQL, so you can use MySQL requirements as a rough guideline. I recommend using the fastest processor available, 1GB of memory, hard disks fast enough to provide acceptable MySQL performance and Gigabit Ethernet for the network. I ran several test configurations, and this configuration seemed adequate enough for a medium-size network (100+ nodes/devices). To keep configuration simple, all firewalls and SELinux instances were disabled in the test environment. If you use firewalls in your environment, open ports 161 (SNMP), 8080 (Zenoss Management Page) and 514 (if you integrate syslog with Zenoss).

Install CentOS 5.1 on the server using your own preferences. I used a bare install with no X Window System or desktop manager. Assign a static IP address and any other pertinent network information (DNS servers and so forth). After the OS install is complete, install the following packages using the yum command below:

yum install mysql mysql-server net-snmp net-snmp-utils gmp httpd

If the mysqld or the httpd service has not started after yum installs it, start it and set it to run for your configured runlevel. Next, download the latest Zenoss Core rpm from SourceForge.net (2.1.3 at the time of this writing), and install it using rpm from the command line. To start all the Zenoss-related dæmons after the rpm has been installed, type the following at a command prompt:

service zenoss start
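As noted earlier, if mysqld and httpd are not already running, one way to start them and enable them for the default runlevels on CentOS is:

```
# Start the services now and enable them across reboots.
service mysqld start
service httpd start
chkconfig mysqld on
chkconfig httpd on
```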

Launch a Web browser from any machine, and type the IP address of the Zenoss server using port 8080 (for example, http://192.168.142.6:8080). Log in to the site using the default account admin with a password of zenoss. This brings up the main dashboard. The dashboard is a compartmentalized view of the state of your managed devices. If you don't like the default display, you can arrange your dashboard any way you want using the various drop-down lists on the portlets (windows). I recommend setting the Production States portlet to display Production, so we can see our test systems after they are added.

Almost everything related to managed devices in Zenoss revolves around classes. With classes, you can create an infinite number of systems, processes or service classifications to monitor. To begin adding devices, we need to set our SNMP community strings at the top-level Devices class. SNMP community strings are like passphrases used to authenticate traffic between devices. If one device wants to communicate with another, they must have matching community names/strings. In many deployments, administrators use the default community name of public (and/or private), which creates a security risk. I recommend changing these strings and making them into a short phrase. You can add numbers and characters to make the community name more complex to guess/crack, but I find phrases easier to remember.

Click on the Devices link on the navigation menu on the left, so that Devices is listed near the top of the page. Click on the zProperties tab, and scroll down. Enter an SNMP

Zenoss and the Art of Network Monitoring

If a server goes down, do you want to hear it? JERAMIAH BOWLING



community string in the zSNMPCommunity field. For our test environment, I used the string whatsourclearanceclarence. You can use different strings with different subclasses of systems or individual systems, but by setting it at the Devices class, it will be used for any subclasses, unless it is overridden. You also could list multiple strings in the zSNMPCommunities under the Devices class, which allows you to define multiple strings for the discovery process discussed later. Make sure your community string (zSNMPCommunity) is in this list.

Installing Net-SNMP on Linux Clients

Now let's set up our Linux systems so they can talk to the Zenoss server. After installing and configuring the operating systems on our other Linux servers, install the Net-SNMP package on each, using the following command on the Ubuntu server:

sudo apt-get install snmpd

And on the Fedora server, use:

yum install net-snmp

Once the Net-SNMP packages are installed, edit out any other lines in the Access Control sections at the beginning of /etc/snmp/snmpd.conf, and add the following lines:

#        sec.name     source             community
com2sec  local        localhost          whatsourclearanceclarence
com2sec  mynetwork    192.168.142.0/24   whatsourclearanceclarence

#        group.name   sec.model   sec.name
group    MyROGroup    v1          local
group    MyROGroup    v1          mynetwork
group    MyROGroup    v2c         local
group    MyROGroup    v2c         mynetwork

#        incl/excl    subtree     mask
view     all          included    .1          80

#        context   sec.model   sec.level   prefix   read   write   notif
access   MyROGroup   ""   any   noauth   exact   all   none   none

Do not edit out any lines beneath the last Access Control sections. Please note that the above is only a mildly restrictive configuration. Consult the snmpd.conf file or the Net-SNMP documentation if you want to tighten access. On the Ubuntu server, you also may have to change the following line in the /etc/default/snmpd file to allow SNMP to bind to anything other than the local loopback address:

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'
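After restarting snmpd, you can verify the agent and community string from the Zenoss server with snmpwalk; the community string matches this article's example, while the IP address is a placeholder for your managed host:

```
# Query the system subtree; sysDescr/sysUpTime output means the agent answers.
snmpwalk -v 2c -c whatsourclearanceclarence 192.168.142.10 system
```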

Installing SNMP on Windows

On the Windows server, access the Add/Remove Programs utility from the Control Panel. Click on the Add/Remove Windows Components button on the left. Scroll down the list of Components, check off Management and Monitoring Tools, and click on the Details button. Check Simple Network Management Protocol in the list, and click OK to install. Close the Add/Remove window, and go into the Services console from Administrative Tools in the Control Panel. Find the SNMP service in the list, right-click on it, and click on Properties to bring up the service properties tabs. Click on the Traps tab, and type in the community name. In the list of Trap Destinations, add the IP address of the Zenoss server. Now, click on the Security tab, and check off the Send authentication trap box, enter the community name, and give it READ-ONLY rights. Click OK, and restart the service.

Return to the Zenoss management Web page. Click the Devices link to go into the subclass of /Devices/Servers/Windows, and on the zProperties tab, enter the name of a domain admin account and password in the zWinUser and zWinPassword fields. This account gives Zenoss access to the Windows Management Instrumentation (WMI) on your Windows systems. Make sure to click Save at the bottom of the page before navigating away.

Adding Devices into Zenoss

Now that our systems have SNMP, we can add them into Zenoss. Devices can be added individually or by scanning the network. Let's do both. To add our Ubuntu server into Zenoss, click on the Add Device link under the Management navigation section. Enter the IP address of the server and the community name. Under Device Class Path, set the selection to /Server/Linux. You could add a variety of other hardware, software and Zenoss information on this page before adding a system, but at a minimum, an IP address, name and community name is required (Figure 1). Click the Add Device button, and the discovery process runs. When the results are displayed, click on the link to the new device to access it.

Figure 1. Adding a Device into Zenoss

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment.

www.linuxjournal.com | 19

To scan the network for devices, click the Networks link under the Browse By section of the navigation menu. If your network is not in the list, add it using CIDR notation. Once added, check the box next to your network, and use the drop-down arrow to click on the Select Discover Devices option. You will see a similar results page as the one from before. When complete, click on the links at the bottom of the results page to access the new devices. Any device found will be placed in the /Discovered class. Because we should have discovered the Fedora server and the Windows server, they should be moved to the /Devices/Servers/Linux and /Devices/Servers/Windows classes, respectively. This can be done from each server's Status tab by using the main drop-down list and selecting Manage→Change Class.

If all has gone well so far, we have a functional SNMP monitoring system that is able to monitor heartbeat/availability (Figure 2) and performance information (Figure 3) on our systems. You can customize other various Status and Performance Monitors to meet your needs, but here we will use the default localhost monitors.

Figure 2. The Zenoss Dashboard

Figure 3. Performance data is collected almost immediately after discovery.

Creating Users and Setting E-Mail Alerts
At this point, we can use the dashboard to monitor the managed devices, but we will be notified only if we visit the site. It would be much more helpful if we could receive alerts via e-mail. To set up e-mail alerting, we need to create a separate user account, as alerts do not work under the admin account. Click on the Settings link under the Management navigation section. Using the drop-down arrow on the menu, select Add User. Enter a user name and e-mail address when prompted. Click on the new user in the list to edit its properties. Enter a password for the new account, and assign a role of Manager. Click Save at the bottom of the page. Log out of Zenoss, and log back in with the new account. Bring the settings page back up, and enter your SMTP server information. After setting up SMTP, we need to create an Alerting Rule for our new user. Click on the Users tab, and click on the account just created in the list. From the resulting page, click on the Edit tab, and enter the e-mail address to which you want alerts sent. Now, go to the Alerting Rules tab, and create a new rule using the drop-down arrow. On the Edit tab of the new Alerting Rule, change the Action to email, Enabled to True, and change the Severity formula to >= Warning (Figure 4). Click Save.

Figure 4. Creating an Alert Rule

Figure 5. Zenoss alerts are sent fresh to your mailbox.


SYSTEM ADMINISTRATION

The above rule sends alerts when any Production server experiences an event rated Warning or higher (Figure 5). Using a filter, you can create any number of rules and have them apply only to specific devices or groups of devices. If you want to limit your alerts by time, to working hours for example, use the Schedule tab on the Alerting Rule to define a window. If no schedule is specified (the default), the rule runs all the time. In our rule, only one user will be notified. You also can create groups of users from the Settings page, so that multiple people are alerted, or you could use a group e-mail address in your user properties.

Services and Processes
We can expand our view of the test systems by adding a process and a service for Zenoss to monitor. When we refer to a process in Zenoss, we mean an active program, usually a daemon, running on a managed device. Zenoss uses regular expressions to monitor processes.

To monitor Postfix on the mail server, first let's define it as a process. Navigate to the Processes page under the Classes section of the navigation menu. Use the drop-down arrow next to OS Processes, and click Add Process. Enter Postfix as the process ID. When you return to the previous page, click on the link to the new process. On the edit tab of the process, enter "master" in the Regex field. Click Save before navigating away. Go to the zProperties tab of the process, and make sure the zMonitor field is set to True. Click Save again. Navigate back to the mail server from the dashboard, and on the OS tab, use the topmost menu's drop-down arrow to select Add→Add OSProcess. After the process has been added, we will be alerted if the Postfix process degrades or fails. While still on the OS tab of the server, place a check mark next to the new Postfix process, and from the OS Processes drop-down menu, select Lock OSProcess. On the next set of options, select Lock from deletion. This protects the process from being overwritten if Zenoss remodels the server.

Figure 6. Zenoss comes with a plethora of predefined Windows services to monitor.
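Zenoss's regex matching behaves much like grep run against the device's process table: any process whose command line matches the expression is tracked. A rough illustration of why "master" catches Postfix (the sample process list below is made up for the example, not real Zenoss output):

```shell
# Sample command lines, as a process monitor might see them.
process_list="/usr/lib/postfix/master
/usr/sbin/sshd -D
/usr/sbin/cron"

# The regex "master" from the example above matches only Postfix.
matches=$(printf '%s\n' "$process_list" | grep -c 'master')
echo "$matches"    # prints 1
```

An overly broad regex (say, a single common word) would match unrelated daemons, so keep the expression as specific as the process name allows.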

Services in Zenoss are defined by active network ports instead of running daemons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1-65535 and include common network apps/protocols, such as SMTP (Port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the main menu's drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).

Only the Beginning
There are a multitude of other features in Zenoss that space here prevents covering, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9) and Zenpacks that provide additional monitoring and performance-capturing capabilities for common applications.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind as long as you work with networks, with Zenoss, at least you will be able to sleep at night, knowing you will hear things when they go down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

Resources

Zenoss: www.zenoss.com

Zenoss SourceForge Downloads Page: sourceforge.net/project/showfiles.php?group_id=163126

NET-SNMP: net-snmp.sourceforge.net

CentOS: www.centos.org

CentOS 5 Mirrors: isoredirect.centos.org/centos/5/isos/i386

Figure 7. Monitoring HTTP as an IPService

Figure 8. Zenoss automatically maps your network for you.

Figure 9. Multiple sites can be monitored geographically with the Google Maps API.


In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market: Dell and HP/Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects such as the Etherboot Project and the Linux Terminal Server Project have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys world-wide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?
A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot eXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP, stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4); for example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.

Thin Clients Booting over a Wireless Bridge
How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON

Figure 1. LTSP Traffic Profile and Boot Sequence

How to Set Up a Thin-Client Network with a Wireless Bridge
One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server: in our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1, serving PXE clients.

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap: 192.168.0.0/24
rpc.mountd: 192.168.0.0/24
rpc.statd: 192.168.0.0/24
in.tftpd: 192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart
sudo invoke-rc.d nfs-common restart
sudo invoke-rc.d portmap restart

2. Build the client's runtime environment: while connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.

3. Configure your SSH keys: to configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP: you should now be ready to start your DHCP server:

sudo invoke-rcd dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client: make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide in getting the bridge running so that the clients can boot over the bridge:

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all daemon programs manually.

Evaluating the Performance of a Thin-Client Network
The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to the wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber; and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

Conclusion
As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our tests show, as illustrated in Figure 5, that heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.

Acknowledgement
This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03/IN3/1396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now, you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in remote power and remote terminals or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup
You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP
Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), and locate the subnet section (or each host section, if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;

filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}
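If particular machines need predictable addresses (useful later, when pxelinux configuration files are named after an IP), per-host entries can sit alongside the subnet block. The host name, MAC and IP below are placeholders for illustration, not values from the article:

```
host pxe-client1 {
    hardware ethernet 88:99:aa:bb:cc:dd;
    fixed-address 10.0.0.50;
}
```

Restart the DHCP service after any change so the new leases take effect.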

PXE Magic: Flexible Network Booting with Menus
Set up a PXE server and then add menus to boot kickstart images, rescue disks and diagnostic tools all from the network. KYLE RANKIN



Install TFTP
After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
# the trivial file transfer protocol.
# The tftp protocol is often used to boot diskless
# workstations, download configuration files to network-aware
# printers, and to start the installation process for
# some operating systems.
service tftp
{
    disable         = no
    socket_type     = dgram
    protocol        = udp
    wait            = yes
    user            = root
    server          = /usr/sbin/in.tftpd
    server_args     = -s /var/lib/tftpboot
    per_source      = 11
    cps             = 100 2
    flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.

Add Syslinux
Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.

PXE Menu Magic
You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
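Both naming schemes can be computed by hand. These helper functions are not part of syslinux; they are just a sketch of the naming rules described above:

```shell
# MAC 88:99:AA:BB:CC:DD -> 01-88-99-aa-bb-cc-dd
# (lowercase hex, hyphens instead of colons, "01-" prefix)
mac_to_pxe_name() {
  echo "01-$(echo "$1" | tr 'A-F:' 'a-f-')"
}

# IP 10.0.0.200 -> 0A0000C8; pxelinux then retries 0A0000C,
# 0A0000, and so on, dropping one hex digit per failed lookup.
ip_to_pxe_hex() {
  printf '%02X%02X%02X%02X\n' $(echo "$1" | tr '.' ' ')
}

mac_to_pxe_name 88:99:AA:BB:CC:DD   # prints 01-88-99-aa-bb-cc-dd
ip_to_pxe_hex 10.0.0.200            # prints 0A0000C8
```

Running either helper tells you exactly which filename to create under pxelinux.cfg for a given machine.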

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux
label linux
    kernel vmlinuz-centos-3.6
    append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?
The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools, like Knoppix and memtest86+, available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus
Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more-specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
  kernel vmlinuz-knx511
  append secure nfsdir=10.0.0.1:/mnt/knoppix511 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx511.gz quiet BOOT_IMAGE=knoppix

label 2
  kernel vmlinuz-centos-45-64
  append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-45-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option names a file whose contents are displayed by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64-bit  |
|                       |
-------------------------

<F1> Main | <F2> Help
Default image will boot in 30 seconds

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk
One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk; I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx511.gz and vmlinuz-knx511, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the ones I used above in my example:
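Spelled out as commands, that copy-and-rename step might look like the following sketch. On a real system the source would be the mounted CD (for example, /mnt/cdrom) and the destination /var/lib/tftpboot; temporary directories stand in here so the sketch can run anywhere:

```shell
# Stand-ins for the mounted Knoppix CD and the TFTP root.
src=$(mktemp -d)    # in real life: /mnt/cdrom
dst=$(mktemp -d)    # in real life: /var/lib/tftpboot
mkdir -p "$src/boot/isolinux"
: > "$src/boot/isolinux/vmlinuz"       # placeholder kernel file
: > "$src/boot/isolinux/miniroot.gz"   # placeholder initial ramdisk

# Copy the kernel and initial ramdisk, giving them distinct names.
cp "$src/boot/isolinux/vmlinuz"     "$dst/vmlinuz-knx511"
cp "$src/boot/isolinux/miniroot.gz" "$dst/miniroot-knx511.gz"
ls "$dst"
```

The distinct names matter because several kernels and ramdisks from different boot images end up side by side in the same TFTP directory.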

label 1
  kernel vmlinuz-knx511
  append secure nfsdir=10.0.0.1:/mnt/knoppix511 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx511.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx511 and miniroot-knx511.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove the 2




from the append line.

The final step might be the largest for you if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix511. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.
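As a sketch only, an /etc/exports line for that share might look like this (the path and network come from this example; the ro and no_root_squash options are common read-only choices of mine, not taken from the article):

```
/mnt/knoppix511  10.0.0.0/255.255.255.0(ro,no_root_squash)
```

After editing /etc/exports, running exportfs -ra tells the NFS server to reread it.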

Extra Features: Memtest86+
Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
  kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion
There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.



Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan
IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825–1829). Later, in 1998, these RFCs were deprecated by RFCs 2401–2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

Creating VPNs with IPsec and SSL/TLS
How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN

Figure 1. A Basic VPN Tunnel

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. S/WAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a --set-mark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.
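One possible shape for such marking rules, shown here in iptables-save format, is the following sketch; the use of the policy match module and the mark value 1 are my assumptions, not taken from the article:

```
*mangle
-A PREROUTING -m policy --pol ipsec --dir in -j MARK --set-mark 1
COMMIT
*filter
-A INPUT -m mark --mark 1 -j ACCEPT
COMMIT
```

Later filter rules can then match on the mark, which, as noted above, survives decryption of the packet.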

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN
OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) to connect to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan
First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
    # Simply use raw RSA keys.
    # After starting openswan, run:
    #   ipsec showhostkey --left (or --right)
    # and fill in the connection similarly
    # to the example below.
    left=192.168.0.89
    leftrsasigkey=0sAQPP...
    # The remote user.
    right=192.168.0.92
    rightrsasigkey=0sAQON...
    type=tunnel
    auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsa.key

Then, copy and paste the contents of rsa.key into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of:

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. Output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                              [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                         [OK]
NETKEY detected, testing for disabled ICMP send_redirects    [OK]
NETKEY detected, testing for disabled ICMP accept_redirects  [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets)  [OK]
Checking that pluto is running                               [OK]
Checking for 'ip' command                                    [OK]
Checking for 'iptables' command                              [OK]
Opportunistic Encryption Support                             [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets; this is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.



Example: Setting Up a VPN Tunnel with OpenVPN
First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side in a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest key; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar 3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).

IPsec and OpenVPN: a Short Comparison
IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement
Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era
Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C# and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security
Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.
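A minimal sketch of the definer-rights pattern, in MySQL 5.0 syntax, looks like the following; every name here (owner, app_user, audit_log, add_audit_entry) is hypothetical, invented for illustration:

```sql
-- The procedure runs with its definer's privileges (SQL SECURITY DEFINER
-- is the default), so app_user needs no direct INSERT grant on audit_log.
CREATE DEFINER = 'owner'@'localhost' PROCEDURE add_audit_entry(IN p_note VARCHAR(200))
    SQL SECURITY DEFINER
    INSERT INTO audit_log (note, created_at) VALUES (p_note, NOW());

GRANT EXECUTE ON PROCEDURE mydb.add_audit_entry TO 'app_user'@'%';
```

The application account then holds only EXECUTE privileges, which is exactly the restriction to well-defined procedure calls described above.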

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Data Abstraction
It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
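As an illustration only, the single-call version of the transfer might look like the sketch below; the accounts table, its columns and the balance and limit checks are invented here, and real code would add error handling:

```sql
-- Hypothetical schema: one procedure call replaces roughly six round trips.
CREATE PROCEDURE transfer_funds(IN p_from INT, IN p_to INT, IN p_amount DECIMAL(10,2))
BEGIN
    START TRANSACTION;
    UPDATE accounts SET balance = balance - p_amount WHERE account_id = p_from;
    UPDATE accounts SET balance = balance + p_amount WHERE account_id = p_to;
    COMMIT;
END
```

The application issues a single CALL transfer_funds(...) statement, and all the intermediate reads and decisions happen inside the server.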

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications

Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

www.linuxjournal.com | 35

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed

It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.
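The article does not reproduce its benchmark code, so as an assumed stand-in, a naive prime-counting routine of the kind such trials use might look like this (illustrative only; the published comparison ran the algorithm in the MySQL stored program language, PHP, Perl, Java, .NET and C):

```shell
# Naive prime counter: trial division up to sqrt(n).
count_primes() {
  limit=$1; count=0; n=2
  while [ "$n" -le "$limit" ]; do
    prime=1; d=2
    while [ $((d * d)) -le "$n" ]; do
      # n is composite if any d in [2, sqrt(n)] divides it
      [ $((n % d)) -eq 0 ] && { prime=0; break; }
      d=$((d + 1))
    done
    [ "$prime" -eq 1 ] && count=$((count + 1))
    n=$((n + 1))
  done
  echo "$count"
}
count_primes 100   # prints 25 (there are 25 primes below 100)
```

The point of such a trial is that every language runs the identical loop, so only raw computation speed differs.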

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

SYSTEM ADMINISTRATION

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation

Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping

It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?

Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations

MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data manipulation language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes, and they will take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
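As one hedged illustration of that last point, the following writes out a stored function that could then be called from ordinary SQL. The function name and the scoring formula are invented for the example, not from the article.

```shell
# Hypothetical stored function callable from plain SQL.
# Load it with something like: mysql mydb < /tmp/fn.sql
cat > /tmp/fn.sql <<'SQL'
CREATE FUNCTION order_risk_score(amount DECIMAL(10,2), days_open INT)
RETURNS DECIMAL(10,2) DETERMINISTIC
RETURN amount * 0.01 + days_open * 0.5;
SQL
echo "wrote /tmp/fn.sql"
```

Once created, it can appear anywhere an expression can, for example: SELECT order_risk_score(amount, days_open) FROM orders;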

You Decide

The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006, with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.



In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat

Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager tries to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare, on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat

Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.
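Since the key never needs to be typed again, one convenient approach is to generate a long random one. This sketch writes the two-line file to /tmp for illustration; the real path is /etc/ha.d/authkeys, and the key here is just random hex.

```shell
# Generate a long random key and write a two-line authkeys file with the
# required 600 permissions (illustrative path; real file: /etc/ha.d/authkeys).
KEY=$(head -c 48 /dev/urandom | sha1sum | awk '{print $1}')
printf 'auth 1\n1 sha1 %s\n' "$KEY" > /tmp/authkeys
chmod 600 /tmp/authkeys
ls -l /tmp/authkeys
```

Copy the same file to every node; the key must match across the cluster.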

The ha.cf File

The next file to configure is ha.cf, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

Getting Started with Heartbeat

Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

Listing 1. The /etc/ha.d/authkeys File

auth 1
1 sha1 ThisIsAVeryLongAndBoringPassword

The first thing you need to specify is the keepalive: the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the cluster declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2
deadtime 32
warntime 16
initdead 64
baud 19200
# On briggs, the serial device is /dev/ttyS1
# On stratton, the serial device is /dev/ttyS0
serial /dev/ttyS1
auto_failback on
node briggs
node stratton
use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1
deadtime 10
warntime 5
udpport 694
# deimos' heartbeat IP address is 192.168.1.11
# phobos' heartbeat IP address is 192.168.1.21
ucast eth1 192.168.1.11
auto_failback off
stonith_host deimos wti_nps ares.example.com erisIsTheKey
stonith_host phobos wti_nps ares.example.com erisIsTheKey
node deimos
node phobos
use_logd yes

It's a toss-up as to which one to use. My thinking is that, so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.
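A mismatch between the node lines in ha.cf and a host's uname -n is a common stumbling block, so a quick sanity check can save time. This sketch writes a sample file to /tmp for illustration; on a real node you would check /etc/ha.d/ha.cf itself.

```shell
# Verify this host's uname -n appears on a "node" line.
# Sample file used for illustration; the real file is /etc/ha.d/ha.cf.
NODE=$(uname -n)
printf 'keepalive 2\nnode %s\nnode otherhost\n' "$NODE" > /tmp/ha.cf.sample
if grep -Fxq "node $NODE" /tmp/ha.cf.sample; then
  echo "ok: $NODE is listed"
else
  echo "missing: add a 'node $NODE' line"
fi
```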

The haresources File

The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.
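On a Debian-style node with several managed services, a small loop keeps this tidy. The service names below are examples, and the echo makes it a dry run; drop the echo and run as root to actually remove them.

```shell
# Dry run: print the removal command for each service Heartbeat will manage.
# Service names are examples; remove the `echo` to really run them (as root).
for svc in apache2 nfs-common nfs-kernel-server; do
  echo /usr/sbin/update-rc.d -f "$svc" remove
done
```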

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).
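The left-to-right start and right-to-left stop ordering described earlier can be sketched in a few lines of shell. Heartbeat performs this internally; the resource names here are examples, and this is only an illustration of the ordering.

```shell
# Illustration of haresources ordering: start left to right, stop right to left.
RESOURCES="IPaddr::192.168.1.9/24 Filesystem::/dev/sda1::/opt/data::xfs apache2"

# Takeover: start in listed order
for r in $RESOURCES; do echo "start $r"; done

# Release: stop in reverse order
reversed=""
for r in $RESOURCES; do reversed="$r $reversed"; done
for r in $reversed; do echo "stop $r"; done
```

So the cluster IP comes up first and goes down last, which is usually what you want.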




Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting

Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion

I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use, Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation

To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of disk space. I recommend at least a 1GHz processor with 512MB or more of memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives
rm java
ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Fedora Directory Server: the Evolution of Linux Authentication

Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script, or directly edit /etc/rc.d/rc.local, to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd
/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for logs and plugins.

Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information, and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora/FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already to make life easier.
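The two prerequisite checks can be scripted. A minimal sketch follows; the server names are placeholders, and ntpdate with -q only queries, it never sets the clock:

```shell
# Minimal pre-replication sanity checks; hostnames are placeholders.
can_resolve() {
    # succeeds only if the name resolves through the system resolver
    getent hosts "$1" > /dev/null
}

check_time() {
    # print the clock offset report from an NTP server (query only)
    ntpdate -q "$1"
}

# example: can_resolve fds2.example.com || echo "fix DNS or /etc/hosts first"
```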

The next step is creating a Replication Manager account to bind one server's directory to the other, and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config
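The entry behind this DN is just an ordinary user object in the config suffix. Created through the console, it would look roughly like the following LDIF (the objectClass list and the hashed password are illustrative assumptions, not output from FDS):

```
dn: uid=RManager,cn=config
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetorgperson
uid: RManager
cn: Replication Manager
sn: Manager
userPassword: {SSHA}hashed-password-here
```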

Click Add and then the Save button at the bottom. On server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

SYSTEM ADMINISTRATION

In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others.

Figure 5. User Screen

Figure 6. Setting Up Replica

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click Next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients

With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility, available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility and enable LDAP on the User Information tab and the Authentication tab. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
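For reference, the utility ends up writing settings equivalent to the following excerpts (the server name and base DN are placeholders for a hypothetical example.com domain, not values from our setup):

```
# /etc/ldap.conf (excerpt)
host ldap.example.com
base dc=example,dc=com

# /etc/nsswitch.conf (excerpt)
passwd: files ldap
shadow: files ldap
group: files ldap
```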


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in msgina.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some dll issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.



Setting up replication in FDS is a relatively painless process.

Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


256MB or greater USB drive, with FAT- or FAT32-based filesystem.

Internet connection with DHCP (for netinstalls only; not required for DSL, Windows password removal or disk wiping with DBAN).

install-mbr (part of the mbr package on Ubuntu or Debian, needed for some USB drives).

syslinux (from the syslinux package on Ubuntu or Debian, required to create the bootsector on the USB drive).

Your system must be capable of booting from USB devices (most have this ability if they're made after 2005).

To install the USB-based version of Billix, first check your drive. If that drive has the U3 Windows software on it, you may want to remove it to unlock all of the drive's capacity (see the Resources section for U3 removal utilities, which are typically Windows-based). Next, if your USB drive has data on it, back up the data. I cannot stress this enough. You will be making adjustments to the partition table of the USB drive, so backing up any data that already is on the key is critical. Download the latest version of Billix from the SourceForge.net project page to your computer. Once the download is complete, untar the contents of the tarball to the root directory of your USB drive.

Now that the contents of the tarball are on your USB drive, you need to install a Master Boot Record (MBR) on the drive and set a bootsector on the drive. The Master Boot Record needs to be set up on the USB drive first. Issue an install-mbr -p1 <device> (where <device> is your USB drive, such as /dev/sdb). Warning: make sure that you get the device of the USB drive correct, or you run the risk of messing up the MBR on your system's boot device. The -p1 option tells install-mbr to set the first partition as active (that's the one that will contain the bootsector).

Next, the bootsector needs to be installed within the first partition. Run syslinux -s <device/partition> (where <device/partition> is the device and partition of the USB drive, such as /dev/sdb1). Warning: much like installing the MBR, installing the bootsector can be a dangerous operation if you run it on the wrong device, so take care and double-check your command line before pressing the Enter key.
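The two commands above can be wrapped in a small guard so that a mistyped device name cannot clobber the system disk. A sketch, with /dev/sdb purely as an example device:

```shell
# Wrap the MBR and bootsector steps with a cheap safety check.
prepare_billix_usb() {
    dev=$1
    case "$dev" in
        /dev/sd[b-z]) ;;   # plausible secondary whole-disk device
        *) echo "refusing to touch $dev" >&2; return 1 ;;
    esac
    install-mbr -p1 "$dev" &&   # write the MBR, mark partition 1 active
    syslinux -s "${dev}1"       # write the bootsector to the first partition
}

# example: prepare_billix_usb /dev/sdb
```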

Figure 1. The Billix Boot Menu

At this point, your USB drive can be unmounted safely, and you can test it out by booting from the USB drive. Once your system successfully boots from the USB drive, you should see a menu similar to the one shown in Figure 1. Simply choose the number for what you want to boot, run or install, and that distribution will spring into action. If you don't select a number, Damn Small Linux will boot automatically after 30 seconds.

Damn Small Linux is a miniature version of Knoppix (it actually has much of the automatic hardware-detection routines of Knoppix in it). As such, it makes an excellent rescue environment, or it can be used as a quick "trusted desktop" in the event you need to "borrow" a friend's computer to do something. I have used DSL in the past to commandeer a system temporarily at a cybercafé, so I could log in to work and fix a sick server. I've even used DSL to boot and mount a corrupted Windows filesystem, and I was able to save some of the data. DSL is fairly full-featured for its size, and it comes with two window managers (JWM or Fluxbox). It can be configured to save its data back to the USB disk in a persistent fashion, so you always can be sure you have your critical files with you and that it's easily accessible.

All the Linux distribution installations have one thing in common: they are all network-based installs. Although this is a good thing for Billix, as they take up very little space (around 10MB for each distro), it can be a bad thing during installation, as the installation time will vary with the speed of your Internet connection. There is one other upside to a network-based installation: in many cases, there is no need to update the newly installed operating system after installation, because the OS bits that are downloaded are typically up to date. Note that when using the Red Hat-based installers (CentOS 4.6, CentOS 5.1 and Fedora 8), the system may appear to hang during the download of a file called minstg2.img. The system probably isn't hanging; it's just downloading that file, which is fairly large (around 40MB), so it can take a while depending on the speed of the mirror and the speed of your connection. Take care not to specify the USB disk accidentally as the install target for the distribution you are attempting to install.

Troubleshooting Billix

A few things can go wrong when converting a USB key to run Billix (or any USB-based distribution). The most common issue is for the USB drive to fail to boot the system. This can be due to several things. Older systems often split USB disk support into USB-Floppy emulation and USB-HDD emulation. For Billix to work on these systems, USB-HDD needs to be enabled. If your drive came with the U3 Windows-based software vault, this typically needs to be disabled or removed prior to installing Billix.

If you're seeing "MBR123" or something similar in the upper-left corner, but the system is hanging, you have a misconfigured MBR. Try install-mbr again, and make sure to use the -p1 switch. You will need to run syslinux again after running install-mbr. If all else fails, you probably need to wipe the USB drive and begin again. Back up the data on the USB drive, then use fdisk to build a new partition table (make sure to set it as FAT or FAT32). Use mkfs.vfat (with the -F 32 switch if it's a FAT32 filesystem) to build a new blank filesystem, untar the tarball again, and run install-mbr and syslinux on the newly defined filesystem.
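The "wipe and start over" sequence can be sketched as one function. The device name, tarball path and /mnt mountpoint are placeholders, and the partition table is assumed to have been rebuilt with fdisk already; nothing runs against real hardware until you call the function yourself:

```shell
# Recovery sketch: rebuild the filesystem, unpack Billix, rewrite boot code.
rebuild_billix() {
    dev=$1
    tarball=$2
    case "$dev" in
        /dev/sd[b-z]) ;;   # same guard as before: secondary disks only
        *) echo "refusing to touch $dev" >&2; return 1 ;;
    esac
    mkfs.vfat -F 32 "${dev}1" &&   # new blank FAT32 filesystem
    mount "${dev}1" /mnt &&
    tar -xf "$tarball" -C /mnt &&  # unpack the Billix tarball again
    umount /mnt &&
    install-mbr -p1 "$dev" &&      # rewrite the MBR
    syslinux -s "${dev}1"          # rewrite the bootsector
}
```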

The memtest86 utility has been around for quite a few years, yet it's a key tool for a sysadmin when faced with a flaky computer. It does only one thing, but it does it very well: it tests the RAM of a system very thoroughly. Simply boot off the USB drive, select memtest from the menu, press Enter, and memtest86 will load and begin testing the RAM of the system immediately. At this point, you can remove the USB drive from the computer. It's no longer needed, as memtest86 is very small and loads completely into memory on startup.

The ntpwd Windows password "cracking" tool can be a controversial tool, but it is included in the Billix distribution because, as a system administrator, I've been asked countless times to get into Windows systems (or accounts on Windows systems) where the password has been lost or forgotten. The ntpwd utility can be a bit daunting, as the UI is text-based and nearly nonexistent, but it does a good job of mounting FAT32- or NTFS-based partitions, editing the SAM account database and saving those changes. Be sure to read all the messages that ntpwd displays, and take care to select the proper disk partition to edit. Also, take the program's advice and nullify a password rather than trying to change it from within the interface; zeroing the password works much more reliably.

DBAN (otherwise known as Darik's Boot and Nuke) is a very good "nuke it from orbit" hard disk wiper. It provides various levels of wipe, from a basic "overwrite the disk with zeros" to a full DoD-certified, multipass wipe. Like memtest86, DBAN is small and loads completely into memory, so you can boot the utility, remove the USB drive, start a wipe and move on to another system. I've used this to wipe clean disks on systems before handing them over to a recycler or before selling a system.

In closing, Billix may not make you coffee in the morning or eradicate Windows from the face of the earth, but having a USB key in your pocket that offers you the functionality to do all of those tasks quickly and easily can make the life of a system administrator (or any Linux-oriented person) much easier.

Bill Childers is an IT Manager in Silicon Valley, where he lives with his wife and two children. He enjoys Linux far too much, and probably should get more sun from time to time. In his spare time, he does work with the Gilroy Garlic Festival, but he does not smell like garlic.



Billix is an aggregation of many different tools that can be useful to system administrators, all compressed down to fit within a 256MB bootable USB thumbdrive.

Expanding Billix

It's relatively easy to expand Billix to support other Linux distributions, such as Knoppix or the Ubuntu live CDs. Copy the contents of the Billix USB tarball to a directory on your hard disk, and download the distro you want. Copy the necessary kernel and initrd to the directory where you put the contents of the USB tarball, taking care to rename any files if there are files in that directory with the same name. Copy any compressed filesystems that your new distro may use to the USB drive (for example, Knoppix has the KNOPPIX directory, and Puppy Linux uses PUP_XXX.SFS). Then, look at the boot configuration for that distro (it should be in isolinux.cfg). Take the necessary lines out of that file, and put them in the Billix syslinux.cfg file, changing filenames as necessary. Optionally, you can add a menu item to the bootmsg file. Finally, run syslinux -s <device>, and reboot your system to test out your newly expanded Billix.
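As an illustration, a Knoppix entry carried over into syslinux.cfg might look like the following. The label, paths and append options are illustrative assumptions; take the real values from the distro's own isolinux.cfg:

```
label knoppix
  kernel /knoppix/linux
  append initrd=/knoppix/minirt.gz ramdisk_size=100000 lang=us
```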

I have a 2GB USB drive that has a "Super-Billix" installation that includes Knoppix and Ubuntu 8.04. An added bonus of having the entire Ubuntu live CD in your pocket is that, thanks to the speed of USB 2.0, you can install Ubuntu in less than ten minutes, which would be really useful at an installfest. There is good information on creating Ubuntu-bootable USB drives available at the Pendrive Linux Web site.

Alternatively, a really neat thing to do (but way beyond the scope of this article) is to convert Billix into a network-boot (via Preboot Execution Environment, or PXE) environment. I've actually got a VMware virtual machine running Billix as a PXE boot server.

Resources

Billix Project Page: sourceforge.net/projects/billix

Damn Small Linux: www.damnsmalllinux.org

DBAN Project Page: dban.sourceforge.net

Knoppix: www.knoppix.net

Pendrive Linux: www.pendrivelinux.com

Syslinux: syslinux.zytor.com/index.php

Pxelinux: syslinux.zytor.com/pxe.php

U3 Removal Software: www.u3.com/uninstall


If a tree falls in the woods and no one is there to hear it, does it make a sound? This is the classic query designed to place your mind into the Zen-like state known as the silent mind. Whether or not you want to hear a tree fall, if you run a network, you probably want to hear a server when it goes down. Many organizations utilize the long-established Simple Network Management Protocol (SNMP) as a way to monitor their networks proactively and listen for things going down.

At a rudimentary level, SNMP requires only two items to work: a management server and a managed device (or devices). The management server pulls status and health information at regular intervals from the managed devices and stores the information in a table. Managed devices use local SNMP agents to notify the management server when defined behavior occurs (such as errors or "traps"), which are stored in the same table on the server. The result is an accurate, real-time reporting mechanism for outages. However, SNMP as a protocol does not stipulate how the data in these tables is to be presented and managed for the end user. That's where a promising new open-source network-monitoring software called Zenoss (pronounced Zeen-ohss) comes in.

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment. The Core version of Zenoss used in this article is freely available under the GPLv2. An Enterprise version also is available with additional features and support. In this article, we install Zenoss on a CentOS 5.1 system to observe its usefulness in a network-monitoring role. From there, we create a simulated multisystem server network using the following systems: a Fedora-based Postfix e-mail server, an Ubuntu server running Apache and a Windows server running File and Print services. To conserve space, only the CentOS installation is discussed in detail here. For the managed systems, only SNMP installation and configuration are covered.

Building the Zenoss Server

Begin by selecting your hardware. Zenoss lacks specific hardware requirements, but it relies heavily on MySQL, so you can use MySQL requirements as a rough guideline. I recommend using the fastest processor available, 1GB of memory, fast enough hard disks to provide acceptable MySQL performance and Gigabit Ethernet for the network. I ran several test configurations, and this configuration seemed adequate enough for a medium-size network (100+ nodes/devices). To keep configuration simple, all firewalls and SELinux instances were disabled in the test environment. If you use firewalls in your environment, open ports 161 (SNMP), 8080 (Zenoss Management Page) and 514 (if you integrate syslog with Zenoss).

Install CentOS 5.1 on the server using your own preferences. I used a bare install with no X Window System or desktop manager. Assign a static IP address and any other pertinent network information (DNS servers and so forth). After the OS install is complete, install the following packages using the yum command below:

yum install mysql mysql-server net-snmp net-snmp-utils gmp httpd

If the mysqld or the httpd service has not started after yum installs it, start it and set it to run for your configured runlevel. Next, download the latest Zenoss Core rpm from SourceForge.net (2.1.3 at the time of this writing), and install it using rpm from the command line. To start all the Zenoss-related daemons after the rpm has been installed, type the following at a command prompt:

service zenoss start

Launch a Web browser from any machine, and type the IP address of the Zenoss server using port 8080 (for example, http://192.168.142.6:8080). Log in to the site using the default account admin with a password of zenoss. This brings up the main dashboard. The dashboard is a compartmentalized view of the state of your managed devices. If you don't like the default display, you can arrange your dashboard any way you want using the various drop-down lists on the portlets (windows). I recommend setting the Production States portlet to display Production, so we can see our test systems after they are added.

Almost everything related to managed devices in Zenoss revolves around classes. With classes, you can create an infinite number of systems, processes or service classifications to monitor. To begin adding devices, we need to set our SNMP community strings at the top-level Devices class. SNMP community strings are like passphrases used to authenticate traffic between devices. If one device wants to communicate with another, they must have matching community names/strings. In many deployments, administrators use the default community name of public (and/or private), which creates a security risk. I recommend changing these strings and making them into a short phrase. You can add numbers and characters to make the community name more complex to guess/crack, but I find phrases easier to remember.

Zenoss and the Art of Network Monitoring

If a server goes down, do you want to hear it? JERAMIAH BOWLING

Click on the Devices link on the navigation menu on the left, so that Devices is listed near the top of the page. Click on the zProperties tab and scroll down. Enter an SNMP community string in the zSNMPCommunity field. For our test environment, I used the string whatsourclearanceclarence. You can use different strings with different subclasses of systems or individual systems, but by setting it at the Devices class, it will be used for any subclasses unless it is overridden. You also could list multiple strings in the zSNMPCommunities under the Devices class, which allows you to define multiple strings for the discovery process discussed later. Make sure your community string (zSNMPCommunity) is in this list.

Installing Net-SNMP on Linux Clients

Now let's set up our Linux systems so they can talk to the Zenoss server. After installing and configuring the operating systems on our other Linux servers, install the Net-SNMP package on each, using the following command on the Ubuntu server:

sudo apt-get install snmpd

And on the Fedora server, use:

yum install net-snmp

Once the Net-SNMP packages are installed, edit out any other lines in the Access Control sections at the beginning of /etc/snmp/snmpd.conf, and add the following lines:

#       sec.name   source            community
com2sec local      localhost         whatsourclearanceclarence
com2sec mynetwork  192.168.142.0/24  whatsourclearanceclarence

#     group.name  sec.model  sec.name
group MyROGroup   v1         local
group MyROGroup   v1         mynetwork
group MyROGroup   v2c        local
group MyROGroup   v2c        mynetwork

#    name  incl/excl  subtree  mask
view all   included   .1       80

#      group      context  sec.model  sec.level  prefix  read  write  notif
access MyROGroup  ""       any        noauth     exact   all   none   none

Do not edit out any lines beneath the last Access Control sections. Please note that the above is only a mildly restrictive configuration. Consult the snmpd.conf file or the Net-SNMP documentation if you want to tighten access. On the Ubuntu server, you also may have to change the following line in the /etc/default/snmpd file to allow SNMP to bind to anything other than the local loopback address:

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'
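After restarting snmpd, you can confirm an agent answers with your community string before pointing Zenoss at it. A hypothetical helper, using the snmpwalk tool that ships with net-snmp-utils (Fedora/CentOS) or the snmp package (Ubuntu); the host and community below are placeholders:

```shell
# Poll a host's system description over SNMP v2c as a connectivity check.
snmp_ping() {
    host=$1
    community=$2
    snmpwalk -v 2c -c "$community" "$host" sysDescr
}

# example: snmp_ping 192.168.142.6 whatsourclearanceclarence
```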

Installing SNMP on Windows

On the Windows server, access the Add/Remove Programs utility from the Control Panel. Click on the Add/Remove Windows Components button on the left. Scroll down the list of Components, check off Management and Monitoring Tools, and click on the Details button. Check Simple Network Management Protocol in the list, and click OK to install. Close the Add/Remove window, and go into the Services console from Administrative Tools in the Control Panel. Find the SNMP service in the list, right-click on it, and click on Properties to bring up the service properties tabs. Click on the Traps tab, and type in the community name. In the list of Trap Destinations, add the IP address of the Zenoss server. Now, click on the Security tab, and check off the Send authentication trap box, enter the community name, and give it READ-ONLY rights. Click OK, and restart the service.

Return to the Zenoss management Web page. Click the Devices link to go into the subclass of /Devices/Servers/Windows, and on the zProperties tab, enter the name of a domain admin account and password in the zWinUser and zWinPassword fields. This account gives Zenoss access to the Windows Management Instrumentation (WMI) on your Windows systems. Make sure to click Save at the bottom of the page before navigating away.

Adding Devices into Zenoss

Now that our systems have SNMP, we can add them into Zenoss. Devices can be added individually or by scanning the network. Let's do both. To add our Ubuntu server into Zenoss, click on the Add Device link under the Management navigation section. Enter the IP address of the server and the community name. Under Device Class Path, set the selection to /Server/Linux. You could add a variety of other hardware, software and Zenoss information on this page before adding a system, but at a minimum, an IP address, name and community name is required (Figure 1). Click the Add Device button, and the discovery process runs. When the results are displayed, click on the link to the new device to access it.

Figure 1. Adding a Device into Zenoss



To scan the network for devices, click the Networks link under the Browse By section of the navigation menu. If your network is not in the list, add it using CIDR notation. Once added, check the box next to your network, and use the drop-down arrow to click on the Select Discover Devices option. You will see a similar results page as the one from before. When complete, click on the links at the bottom of the results page to access the new devices. Any device found will be placed in the /Discovered class. Because we should have discovered the Fedora server and the Windows server, they should be moved to the /Devices/Servers/Linux and /Devices/Servers/Windows classes, respectively. This can be done from each server's Status tab by using the main drop-down list and selecting Manage→Change Class.

If all has gone well so far, we have a functional SNMP monitoring system that is able to monitor heartbeat/availability (Figure 2) and performance information (Figure 3) on our systems. You can customize other various Status and Performance Monitors to meet your needs, but here we will use the default localhost monitors.

Figure 2. The Zenoss Dashboard

Figure 3. Performance data is collected almost immediately after discovery.

Creating Users and Setting E-Mail Alerts

At this point, we can use the dashboard to monitor the managed devices, but we will be notified only if we visit the site. It would be much more helpful if we could receive alerts via e-mail. To set up e-mail alerting, we need to create a separate user account, as alerts do not work under the admin account. Click on the Settings link under the Management navigation section. Using the drop-down arrow on the menu, select Add User. Enter a user name and e-mail address when prompted. Click on the new user in the list to edit its properties. Enter a password for the new account, and assign a role of Manager. Click Save at the bottom of the page. Log out of Zenoss, and log back in with the new account. Bring the settings page back up, and enter your SMTP server information. After setting up SMTP, we need to create an Alerting Rule for our new user. Click on the Users tab, and click on the account just created in the list. From the resulting page, click on the Edit tab, and enter the e-mail address to which you want alerts sent. Now, go to the Alerting Rules tab, and create a new rule using the drop-down arrow. On the edit tab of the new Alerting Rule, change the Action to email, Enabled to True, and change the Severity formula to >= Warning (Figure 4). Click Save.

Figure 4. Creating an Alert Rule

Figure 5. Zenoss alerts are sent fresh to your mailbox.



The above rule sends alerts when any Production server experiences an event rated Warning or higher (Figure 5). Using a filter, you can create any number of rules and have them apply only to specific devices or groups of devices. If you want to limit your alerts by time, to working hours for example, use the Schedule tab on the Alerting Rule to define a window. If no schedule is specified (the default), the rule runs all the time. In our rule, only one user will be notified. You also can create groups of users from the Settings page, so that multiple people are alerted, or you could use a group e-mail address in your user properties.

Services and Processes

We can expand our view of the test systems by adding a process and a service for Zenoss to monitor. When we refer to a process in Zenoss, we mean an active program, usually a daemon, running on a managed device. Zenoss uses regular expressions to monitor processes.

To monitor Postfix on the mail server, first, let's define it as a process. Navigate to the Processes page under the Classes section of the navigation menu. Use the drop-down arrow next to OS Processes, and click Add Process. Enter Postfix as the process ID. When you return to the previous page, click on the link to the new process. On the edit tab of the process, enter master in the Regex field. Click Save before navigating away. Go to the zProperties tab of the process, and make sure the zMonitor field is set to True. Click Save again. Navigate back to the mail server from the dashboard, and on the OS tab, use the topmost menu's drop-down arrow to select Add→Add OSProcess. After the process has been added, we will be alerted if the Postfix process degrades or fails. While still on the OS tab of the server, place a check mark next to the new Postfix process, and from the OS Processes drop-down menu, select Lock OSProcess. On the next set of options, select Lock from deletion. This protects the process from being overwritten if Zenoss remodels the server.

Figure 6. Zenoss comes with a plethora of predefined Windows services to monitor.
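Conceptually, the process check boils down to testing the process table against the regex we entered. Here is a crude stand-in for that test (the ps output line is fabricated for illustration; this is not Zenoss code):

```shell
# Fake line from a `ps` listing on the mail server (illustrative only)
ps_line="root      1234  /usr/lib/postfix/master"

# Does the line match the "master" regex we gave Zenoss?
if printf '%s\n' "$ps_line" | grep -q 'master'; then
  status=up
else
  status=down
fi
echo "postfix process: $status"
```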

Services in Zenoss are defined by active network ports instead of running daemons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1-65535 and include common network apps/protocols, such as SMTP (Port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).
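Each built-in IPService is essentially a service name tied to a well-known port. A tiny lookup table mirroring the three examples above (a sketch for illustration, not Zenoss code):

```shell
# Map the service names mentioned above to their well-known ports
port_for() {
  case "$1" in
    smtp) echo 25 ;;
    dns)  echo 53 ;;
    http) echo 80 ;;
    *)    echo "unknown" ;;
  esac
}

echo "smtp=$(port_for smtp) dns=$(port_for dns) http=$(port_for http)"
```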

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the main menu's drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).

Only the Beginning

There are a multitude of other features in Zenoss that space here prevents covering, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9) and Zenpacks that provide additional monitoring and performance-capturing capabilities for common applications.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind as long as you work with networks, with Zenoss, at least you will be able to sleep at night knowing you will hear things when they go down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

Resources

Zenoss: www.zenoss.com

Zenoss SourceForge Downloads Page: sourceforge.net/project/showfiles.php?group_id=163126

NET-SNMP: net-snmp.sourceforge.net

CentOS: www.centos.org

CentOS 5 Mirrors: isoredirect.centos.org/centos/5/isos/i386

Figure 7. Monitoring HTTP as an IPService

Figure 8. Zenoss automatically maps your network for you.

Figure 9. Multiple sites can be monitored geographically with the Google Maps API.


In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market: Dell and HP/Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects such as the Etherboot Project and the Linux Terminal Server Project have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys world-wide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?

A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot EXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP; stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4); for example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a

Thin Clients Booting over a Wireless Bridge

How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON


Figure 1. LTSP Traffic Profile and Boot Sequence

thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.

How to Set Up a Thin-Client Network with a Wireless Bridge

One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1, serving PXE clients.
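For orientation, the relevant part of /etc/ltsp/dhcpd.conf looks roughly like this under the 192.168.0.x defaults. This is a sketch only; the exact ranges, options and boot filename your package ships may differ:

```
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.20 192.168.0.250;
    option routers 192.168.0.1;
    next-server 192.168.0.1;
    filename "/ltsp/i386/pxelinux.0";
}
```

If your server lives on a different subnet, change the subnet, range and next-server values to match before restarting the DHCP server.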

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap: 192.168.0.0/24
rpc.mountd: 192.168.0.0/24
rpc.statd: 192.168.0.0/24
in.tftpd: 192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart
sudo invoke-rc.d nfs-common restart
sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide in getting the bridge running so that the clients can boot over the bridge:

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all daemon programs manually.

Evaluating the Performance of a Thin-Client Network

The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to a wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber;



Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Conclusion

As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our test shows, as illustrated in Figure 5, heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.
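The rc2.d suggestion works because init runs the scripts in each runlevel directory in name order, so giving LDM's symlink a low sequence number starts it first. This can be simulated with a temporary directory standing in for the client chroot's rc2.d (the chroot path and symlink names here are illustrative, not taken from a specific LTSP release):

```shell
# Temp dir stands in for something like /opt/ltsp/i386/etc/rc2.d in the chroot
rcdir=$(mktemp -d)
touch "$rcdir/S99ldm"

# A lower sequence number makes init start the service earlier in the runlevel
mv "$rcdir/S99ldm" "$rcdir/S01ldm"
ls "$rcdir"
```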

Acknowledgement

This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03/IN3/1396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now, you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in remote power and remote terminals or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup

You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP

Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), and locate the subnet section (or each host section if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;
filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus

Set up a PXE server, and then add menus to boot kickstart images, rescue disks and diagnostic tools all from the network. KYLE RANKIN



Install TFTP

After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package, or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using the trivial file transfer
# protocol. The tftp protocol is often used to boot diskless workstations,
# download configuration files to network-aware printers and to start the
# installation process for some operating systems.
service tftp
{
    disable         = no
    socket_type     = dgram
    protocol        = udp
    wait            = yes
    user            = root
    server          = /usr/sbin/in.tftpd
    server_args     = -s /var/lib/tftpboot
    per_source      = 11
    cps             = 100 2
    flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.

Add Syslinux

Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.
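Those two steps can be rehearsed safely with temporary directories standing in for the syslinux package contents and the TFTP root (the real target is /var/lib/tftpboot; the stand-in file below is empty, purely for illustration):

```shell
# Temp dirs stand in for the syslinux files and /var/lib/tftpboot
src=$(mktemp -d)
tftproot=$(mktemp -d)
: > "$src/pxelinux.0"      # stand-in for the pxelinux.0 shipped with syslinux

# Copy the bootstrap binary and create the configuration directory
cp "$src/pxelinux.0" "$tftproot/pxelinux.0"
mkdir -p "$tftproot/pxelinux.cfg"
ls "$tftproot"
```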

PXE Menu Magic

You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
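Both naming schemes can be derived mechanically. This sketch builds the MAC-based filename for the example address above and the sequence of hex-IP names pxelinux would try for a client at 10.0.0.200 (hypothetical helper code, not part of syslinux):

```shell
# 1) MAC-based name: lowercase the hex pairs and join them with hyphens
mac="88:99:AA:BB:CC:DD"
macfile="01-$(printf '%s' "$mac" | tr 'A-F:' 'a-f-')"
echo "$macfile"

# 2) Hex-IP name: convert each octet of the IP to two hex digits...
set -- $(printf '%s' "10.0.0.200" | tr '.' ' ')
hex=$(printf '%02X%02X%02X%02X' "$1" "$2" "$3" "$4")

# ...then print the successive names tried as pxelinux drops trailing digits
t=$hex
while [ -n "$t" ]; do
  echo "$t"
  t=${t%?}
done
```

For 10.0.0.200 the first name tried is 0A0000C8, then 0A0000C, and so on down to 0A, which is how one file can cover a whole IP range.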

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux

label linux
    kernel vmlinuz-centos-3.6
    append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?

The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what



happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools, like Knoppix and memtest86+, available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus

Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix
label 2
    kernel vmlinuz-centos-4.5-64
    append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option lists a message file to display by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
|  1. Knoppix 5.1.1     |
|  2. CentOS 4.5 64 bit |
|                       |
-------------------------
<F1> Main | <F2> Help
Default image will boot in 30 seconds...

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.
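As a sketch of what a jazzed-up menu could look like, the fragment below uses the syslinux menu.c32 module. It is an illustration, not part of the article's setup: it assumes you have copied menu.c32 from your syslinux distribution into /var/lib/tftpboot, and it reuses the kernels and append lines from the example above:

```
default menu.c32
prompt 0
timeout 300
menu title == Network Boot Options ==

label knoppix
    menu label Knoppix 5.1.1 (NFS rescue)
    kernel vmlinuz-knx511
    append secure nfsdir=10.0.0.1:/mnt/knoppix511 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx511.gz quiet BOOT_IMAGE=knoppix

label centos
    menu label CentOS 4.5 64-bit kickstart
    kernel vmlinuz-centos-45-64
    append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-45-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg
```

With menu.c32 as the default, pxelinux draws a cursor-driven menu instead of the plain text prompt, and the menu label lines replace hand-drawn f1.msg art.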

Extra Features: PXE Rescue Disk
One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk; I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx511.gz and vmlinuz-knx511, respectively. Now edit your pxelinux.cfg/default file, and add lines like the ones I used above in my example:

label 1
    kernel vmlinuz-knx511
    append secure nfsdir=10.0.0.1:/mnt/knoppix511 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx511.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx511 and miniroot-knx511.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove the 2 from the append line.

The final step might be the largest for you, if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix511. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them.

Extra Features: Memtest86+
Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
    kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion
There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.



Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan
IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825-1829). Later, in 1998, these RFCs were deprecated by RFCs 2401-2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

Creating VPNs with IPsec and SSL/TLS
How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN

Figure 1. A Basic VPN Tunnel

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. S/WAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a setmark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.
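As an illustrative sketch (not taken from the article), mark-based rules could look like the following, in iptables-restore format. The peer address matches the LAN example later in this article, and the mark value 0x1 is an arbitrary choice:

```
*mangle
:PREROUTING ACCEPT [0:0]
# Mark incoming ESP packets from the peer; the sk_buff mark
# survives decryption by the NETKEY stack.
-A PREROUTING -p esp -s 192.168.0.92 -j MARK --set-mark 0x1
COMMIT
*filter
:INPUT ACCEPT [0:0]
# After decryption, the inner (cleartext) packet still carries
# mark 0x1, so it can be matched even without an ipsecX interface.
-A INPUT -m mark --mark 0x1 -j ACCEPT
COMMIT
```

Loading this with iptables-restore lets the filter table distinguish traffic that arrived through the tunnel from ordinary cleartext traffic.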

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN
OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) to connect to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan
First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
    #
    # Simply use raw RSA keys.
    # After starting Openswan, run:
    #     ipsec showhostkey --left (or --right)
    # and fill in the connection similarly
    # to the example below.
    left=192.168.0.89
    leftrsasigkey=0sAQPP
    # The remote user.
    right=192.168.0.92
    rightrsasigkey=0sAQON
    type=tunnel
    auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsa.key

Then, copy and paste the contents of rsa.key into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of:

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.
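Putting the Roadwarrior pieces together, a conn section for roaming clients might look like the following sketch. It is illustrative only: the left address and truncated keys follow the style of the earlier example, and auto=add (rather than auto=start) is used because the server cannot initiate a connection to an unknown address:

```
conn roadwarrior
    left=192.168.0.89
    leftrsasigkey=0sAQPP
    # The roaming client connects from an unknown address.
    right=%any
    rightrsasigkey=0sAQON
    type=tunnel
    # Load the connection at startup, but wait for the
    # Roadwarrior to initiate the tunnel.
    auto=add
```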

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. The output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                               [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                          [OK]
NETKEY detected, testing for disabled ICMP send_redirects     [OK]
NETKEY detected, testing for disabled ICMP accept_redirects   [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets)   [OK]
Checking that pluto is running                                [OK]
Checking for 'ip' command                                     [OK]
Checking for 'iptables' command                               [OK]
Opportunistic Encryption Support                              [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets; this is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.


Example: Setting Up a VPN Tunnel with OpenVPN
First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side in a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest key; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file, named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar 3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).
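For instance, rules like the following iptables-restore sketch treat tun0 as just another interface. This is an illustration, not part of the article's setup: the choice of eth0 as the LAN side and the decision to forward between the two interfaces are assumptions:

```
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
# Accept traffic arriving over the VPN tunnel, exactly as for eth0.
-A INPUT -i tun0 -j ACCEPT
# Let VPN peers reach the LAN behind eth0, and let replies flow back.
-A FORWARD -i tun0 -o eth0 -j ACCEPT
-A FORWARD -i eth0 -o tun0 -m state --state ESTABLISHED,RELATED -j ACCEPT
COMMIT
```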

IPsec and OpenVPN: a Short Comparison
IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement
Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era
Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

- Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

- Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

- Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

- Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security
Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.
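A minimal sketch of this model in MySQL 5 syntax, with invented names (owner, app_user, audit_log): the application account may call the procedure, but it holds no direct privileges on the table behind it:

```sql
-- The procedure runs with the privileges of 'owner', not the caller.
CREATE DEFINER = 'owner'@'localhost' PROCEDURE log_entry(IN msg VARCHAR(255))
    SQL SECURITY DEFINER
    INSERT INTO audit_log (logged_at, message) VALUES (NOW(), msg);

-- The application account can call the procedure, but it cannot
-- touch audit_log directly or run ad hoc SQL against it.
GRANT EXECUTE ON PROCEDURE mydb.log_entry TO 'app_user'@'%';
```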

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).
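To see why, contrast naive string pasting with a parameterized procedure call. This Python sketch is illustrative only: get_balance is a hypothetical procedure, the %s placeholder style follows DB-API drivers such as MySQLdb, and no database is touched; only the statement construction is shown:

```python
# Illustrative sketch: how an injected input behaves in each style.
def naive_query(owner):
    # Vulnerable: user input is spliced into the SQL text itself.
    return "SELECT balance FROM accounts WHERE owner = '%s'" % owner

def procedure_call(owner):
    # Resistant: the statement is fixed; the driver ships the value
    # separately, so the input can never become SQL.
    return ("CALL get_balance(%s)", (owner,))

malicious = "x' OR '1'='1"
print(naive_query(malicious))     # the WHERE clause now matches every row
print(procedure_call(malicious))  # the SQL text is unchanged
```

The injected clause rewrites the first statement's logic, while in the second the hostile input remains an inert parameter value.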

Data Abstraction
It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
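The fund-transfer logic can be sketched as a MySQL 5 stored procedure. This is a hypothetical illustration: the accounts table, its columns and the omission of withdrawal-limit checks are assumptions, not the article's actual schema:

```sql
DELIMITER //
CREATE PROCEDURE transfer_funds(
    IN from_acct INT, IN to_acct INT, IN amount DECIMAL(10,2),
    OUT ok TINYINT)
BEGIN
    DECLARE from_balance DECIMAL(10,2);
    START TRANSACTION;
    -- Lock the source row while we check the balance.
    SELECT balance INTO from_balance
      FROM accounts WHERE account_id = from_acct FOR UPDATE;
    IF from_balance >= amount THEN
        UPDATE accounts SET balance = balance - amount
         WHERE account_id = from_acct;
        UPDATE accounts SET balance = balance + amount
         WHERE account_id = to_acct;
        COMMIT;
        SET ok = 1;
    ELSE
        ROLLBACK;
        SET ok = 0;
    END IF;
END //
DELIMITER ;
```

The client then issues a single CALL transfer_funds(...) rather than six or more separate statements, each with its own network round trip.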

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications
Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed
It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network: the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well, even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation

Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping

It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?

Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements, such as LOCK TABLES, or use the REPLACE statement.
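For illustration, here is what two of those MySQL-specific constructs look like (the table and column names are invented):

```sql
-- REPLACE is a MySQL extension: insert the row, or overwrite the
-- existing row that has the same primary or unique key.
REPLACE INTO settings (name, value) VALUES ('theme', 'dark');

-- LOCK TABLES is likewise non-ANSI syntax.
LOCK TABLES settings WRITE;
UPDATE settings SET value = 'light' WHERE name = 'theme';
UNLOCK TABLES;
```

Code like this ties the application layer to MySQL, which is exactly the dependency that moving it into a stored procedure can hide.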

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations

MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes and will take effect for all applications that access the database, including ad hoc SQL.
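A denormalization trigger of this kind might look like the following sketch (the order/order-item schema is invented for illustration):

```sql
-- Keep a running total on the parent order row whenever an item is
-- inserted, so every application sees a consistent total.
CREATE TRIGGER order_items_ai AFTER INSERT ON order_items
FOR EACH ROW
  UPDATE orders
     SET total_amount = total_amount + NEW.quantity * NEW.unit_price
   WHERE order_id = NEW.order_id;
```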

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.
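A purge job on the event scheduler can be as simple as this sketch (table and column names invented):

```sql
-- Requires the scheduler to be enabled:
--   SET GLOBAL event_scheduler = ON;
CREATE EVENT purge_old_sessions
  ON SCHEDULE EVERY 1 DAY
  DO DELETE FROM sessions
     WHERE last_seen < NOW() - INTERVAL 30 DAY;
```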

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
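As a hypothetical example, a business calculation wrapped in a stored function becomes reusable from any SQL statement (the discount rule and names here are invented):

```sql
CREATE FUNCTION order_discount(amount DECIMAL(10,2))
  RETURNS DECIMAL(10,2) DETERMINISTIC
  RETURN CASE WHEN amount > 1000 THEN amount * 0.05 ELSE 0 END;

-- Then, in ordinary SQL:
-- SELECT order_id, order_discount(total_amount) FROM orders;
```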

You Decide

The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006; with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.



In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat

Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare, on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat

Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.
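Generating a long random key and setting the required permissions can be scripted. This sketch writes to a local ha.d directory for illustration; on a real node the file is /etc/ha.d/authkeys and must be owned by root:

```shell
# Create an authkeys file with a 64-hex-character random key,
# then restrict it to mode 600 as Heartbeat requires.
mkdir -p ha.d
key=$(head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n')
printf 'auth 1\n1 sha1 %s\n' "$key" > ha.d/authkeys
chmod 600 ha.d/authkeys
```

Because the key never has to be typed again, there is no cost to making it long and unguessable.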

The ha.cf File

The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive: the time between heartbeats, in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try increasing the keepalive.

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

Listing 1. The /etc/ha.d/authkeys File

auth 1
1 sha1 ThisIsAVeryLongAndBoringPassword

Sometimes there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos' heartbeat ip address is 192.168.1.11

# phobos' heartbeat ip address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File

The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).



Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages.

Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.
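The start/stop contract Heartbeat relies on can be illustrated with a minimal sketch. Here "myservice" is an invented name, and the script body is modeled as a shell function for demonstration; a real script would live in /etc/init.d and manage an actual daemon:

```shell
# Minimal sketch of the LSB-style start/stop interface Heartbeat calls.
myservice() {
    case "$1" in
        start) echo "Starting myservice" ;;
        stop)  echo "Stopping myservice" ;;
        *)     echo "Usage: myservice {start|stop}" >&2; return 1 ;;
    esac
}

myservice start
myservice stop
```

Heartbeat simply invokes each resource with start when a node takes over and with stop when it relinquishes control, in the order described above.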

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second. Not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting

Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion

I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation

To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of disk space. I recommend at least a 1GHz processor or above with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

When setup is complete, use the syntax returned from the script to access the admin console (./startconsole -u admin http://fullyqualifieddomainname:port) using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script, or directly edit /etc/rc.d/rc.local, to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora + FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts, or use DNS. I recommend having DNS and NTP in place already to make life easier.
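A quick sanity check on each node covers both prerequisites. This is a hedged sketch: "localhost" stands in for the peer's hostname (in practice you would use server two's FQDN), and clock comparison is done by eye here rather than via an NTP query:

```shell
# Verify the peer's name resolves (via /etc/hosts or DNS)
# and print the UTC time for a manual clock comparison.
peer=localhost   # replace with the other server's FQDN
getent hosts "$peer" > /dev/null && echo "resolves: $peer"
date -u          # run on both servers; the outputs should be close
```

Run the same two checks on both servers, in each direction, before attempting to initialize a replica.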

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On



In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others.

Figure 5. User Screen

Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on the second server, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients
With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but due to space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain POSIX information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility and enable LDAP on the User Information tab and the Authentication tab. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM, instead of your local passwd and shadow files, to authenticate users.
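As a rough sketch of what the utility ends up writing (the server address and base DN below are placeholder values for this example, not values from the article), the resulting configuration amounts to entries like these in /etc/ldap.conf and /etc/nsswitch.conf:

```
# /etc/ldap.conf (excerpt) -- host and base are assumed example values
host 192.168.1.41
base dc=example,dc=com

# /etc/nsswitch.conf (excerpt) -- consult files first, then the directory
passwd: files ldap
shadow: files ldap
group:  files ldap
```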

www.linuxjournal.com | 45

Figure 7 Enter a name and password

Figure 8 Enter information for the RManager account

Figure 9 The Built-in Authentication Utility

During login, the local system pulls the LDAP account's POSIX information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll

to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge and extract the files. For this article, I used version 1.8.4, as I ran into some dll issues with

version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions
FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and makes it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.


SYSTEM ADMINISTRATION

Setting up replication in FDS is a relatively painless process.

Figure 11 Login Prompt

Figure 10 Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


the newly installed operating system after installation, because the OS bits that are downloaded are typically up to date. Note that when using the Red Hat-based installers (CentOS 4.6, CentOS 5.1 and Fedora 8), the system may appear to hang during the download of a file called minstg2.img. The system probably isn't hanging; it's just downloading that file, which is

fairly large (around 40MB), so it can take a while, depending on the speed of the mirror and the speed of your connection. Take care not to specify the USB disk accidentally as the install target for the distribution you are attempting to install.

The memtest86 utility has been around for quite a few years, yet it's a key tool for a sysadmin when faced with a flaky computer. It does only one thing, but it does it very well: it tests the RAM of a system very thoroughly. Simply boot off the USB drive, select memtest from the menu, and press Enter, and memtest86 will load and begin testing the RAM of the system immediately. At this point, you can remove the USB drive from the computer. It's no longer needed, as memtest86 is very small and loads completely into memory on startup.

The ntpwd Windows password "cracking" tool can be a controversial tool, but it is included in the Billix distribution because, as a system administrator, I've been asked countless times to get into Windows systems (or accounts on Windows systems) where the password has been lost or forgotten. The ntpwd utility can be a bit daunting, as the UI is text-based and nearly nonexistent, but it does a good job of mounting FAT32- or NTFS-based partitions, editing the SAM account database and saving those changes. Be sure to read all the messages that ntpwd displays, and take care to select the proper disk partition to edit. Also, take the program's advice and nullify a password rather than trying to change it from within the interface; zeroing the password works much more reliably.

DBAN (otherwise known as Darik's Boot and Nuke) is a very good "nuke it from orbit" hard disk wiper. It provides various levels of wipe, from a basic "overwrite the disk with zeros" to a full DoD-certified, multipass wipe. Like memtest86, DBAN is small and loads completely into memory, so you can boot the utility, remove the USB drive, start a wipe and move on to another system. I've used this to wipe disks clean on systems before handing them over to a recycler or before selling a system.

In closing, Billix may not make you coffee in the morning or eradicate Windows from the face of the earth, but having a USB key in your pocket that offers you the functionality to do all of those tasks quickly and easily can make the life of a system administrator (or any Linux-oriented person) much easier.

Bill Childers is an IT Manager in Silicon Valley, where he lives with his wife and two children. He enjoys Linux far too much, and probably should get more sun from time to time. In his spare time, he does work with the Gilroy Garlic Festival, but he does not smell like garlic.



Billix is an aggregation of many different tools that can be useful to system administrators, all compressed down to fit within a 256MB bootable USB thumbdrive.

Expanding Billix
It's relatively easy to expand Billix to support other Linux distributions, such as Knoppix or the Ubuntu live CDs. Copy the contents of the Billix USB tarball to a directory on your hard disk, and download the distro you want. Copy the necessary kernel and initrd to the directory where you put the contents of the USB tarball, taking care to rename any files if there are files in that directory with the same name. Copy any compressed filesystems that your new distro may use to the USB drive (for example, Knoppix has the KNOPPIX directory, and Puppy Linux uses PUP_XXX.SFS). Then, look at the boot configuration for that distro (it should be in isolinux.cfg). Take the necessary lines out of that file, and put them in the Billix syslinux.cfg file, changing filenames as necessary. Optionally, you can add a menu item to the bootmsg file. Finally, run syslinux -s <device>, and reboot your system to test out your newly expanded Billix.
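As an illustration of the kind of stanza you would carry over from a distro's isolinux.cfg (the label and filenames below are invented for this example, not taken from any particular distribution), a Billix syslinux.cfg entry might look like:

```
# syslinux.cfg excerpt -- label and filenames are hypothetical
label mydistro
  kernel mydistro.krn
  append initrd=mydistro.igz ramdisk_size=100000 quiet
```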

I have a 2GB USB drive that has a "Super-Billix" installation that includes Knoppix and Ubuntu 8.04. An added bonus of having the entire Ubuntu live CD in your pocket is that, thanks to the speed of USB 2.0, you can install Ubuntu in less than ten minutes, which would be really useful at an installfest. There is good information on creating Ubuntu-bootable USB drives available at the Pendrive Linux Web site.

Alternatively, a really neat thing to do (but way beyond the scope of this article) is to convert Billix into a network-boot (via Pre-Execution Environment, or PXE) environment. I've actually got a VMware virtual machine running Billix as a PXE boot server.

Resources

Billix Project Page: sourceforge.net/projects/billix

Damn Small Linux: www.damnsmalllinux.org

DBAN Project Page: dban.sourceforge.net

Knoppix: www.knoppix.net

Pendrive Linux: www.pendrivelinux.com

Syslinux: syslinux.zytor.com/index.php

Pxelinux: syslinux.zytor.com/pxe.php

U3 Removal Software: www.u3.com/uninstall


If a tree falls in the woods and no one is there to hear it, does it make a sound? This is the classic query designed to place your mind into the Zen-like state known as the silent mind. Whether or not you want to hear a tree fall, if you run a network, you probably want to hear a server when it goes down. Many organizations utilize the long-established Simple Network Management Protocol (SNMP) as a way to monitor their networks proactively and listen for things going down.

At a rudimentary level, SNMP requires only two items to work: a management server and a managed device (or devices). The management server pulls status and health information at regular intervals from the managed devices and stores the information in a table. Managed devices use local SNMP agents to notify the management server when defined behavior occurs (such as errors or "traps"), which are stored in the same table on the server. The result is an accurate, real-time reporting mechanism for outages. However, SNMP as a protocol does not stipulate how the data in these tables is to be presented and managed for the end user. That's where a promising new open-source network-monitoring software called Zenoss (pronounced Zeen-ohss) comes in.

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment. The Core version of Zenoss used in this article is freely available under the GPLv2. An Enterprise version also is available with additional features and support. In this article, we install Zenoss on a CentOS 5.1 system to observe its usefulness in a network-monitoring role. From there, we create a simulated multisystem server network using the following systems: a Fedora-based Postfix e-mail server, an Ubuntu server running Apache and a Windows server running File and Print services. To conserve space, only the CentOS installation is discussed in detail here. For the managed systems, only SNMP installation and configuration are covered.

Building the Zenoss Server
Begin by selecting your hardware. Zenoss lacks specific hardware requirements, but it relies heavily on MySQL, so you can use MySQL requirements as a rough guideline. I recommend using the fastest processor available, 1GB of memory, fast enough hard disks to provide acceptable MySQL performance and Gigabit Ethernet for the network. I ran several test configurations, and this configuration seemed adequate enough for a medium-size network (100+ nodes/devices). To keep configuration simple, all firewalls and SELinux instances were disabled in the test environment. If you use firewalls in your environment, open ports 161 (SNMP), 8080 (Zenoss Management

Page) and 514 (if you integrate syslog with Zenoss).

Install CentOS 5.1 on the server using your own preferences. I used a bare install with no X Window System or desktop manager. Assign a static IP address and any other pertinent network information (DNS servers and so forth). After the OS install is complete, install the following packages using the yum command below:

yum install mysql mysql-server net-snmp net-snmp-utils gmp httpd

If the mysqld or the httpd service has not started after yum installs it, start it and set it to run for your configured runlevel. Next, download the latest Zenoss Core rpm from Sourceforge.net (2.1.3 at the time of this writing), and install it using rpm from the command line. To start all the Zenoss-related dæmons after the rpm has been installed, type the following at a command prompt:

service zenoss start

Launch a Web browser from any machine, and type the IP address of the Zenoss server using port 8080 (for example, http://192.168.142.6:8080). Log in to the site using the default account admin with a password of zenoss. This brings up the main dashboard. The dashboard is a compartmentalized view of the state of your managed devices. If you don't like the default display, you can arrange your dashboard any way you want using the various drop-down lists on the portlets (windows). I recommend setting the Production States portlet to display Production, so we can see our test systems after they are added.

Almost everything related to managed devices in Zenoss revolves around classes. With classes, you can create an infinite number of systems, processes or service classifications to monitor. To begin adding devices, we need to set our SNMP community strings at the top-level Devices class. SNMP community strings are like passphrases used to authenticate traffic between devices. If one device wants to communicate with another, they must have matching community names/strings. In many deployments, administrators use the default community name of public (and/or private), which creates a security risk. I recommend changing these strings and making them into a short phrase. You can add numbers and characters to make the community name more complex to guess/crack, but I find phrases easier to remember.

Click on the Devices link on the navigation menu on the left, so that Devices is listed near the top of the page. Click on the zProperties tab, and scroll down. Enter an SNMP

Zenoss and the Art of Network Monitoring
If a server goes down, do you want to hear it? JERAMIAH BOWLING



community string in the zSNMPCommunity field. For our test environment, I used the string whatsourclearanceclarence. You can use different strings with different subclasses of systems or individual systems, but by setting it at the Devices class, it will be used for any subclasses unless it is overridden. You also could list multiple strings in the zSNMPCommunities under the Devices class, which allows you to define multiple strings for the discovery process discussed later. Make sure your community string (zSNMPCommunity) is in this list.

Installing Net-SNMP on Linux Clients
Now let's set up our Linux systems so they can talk to the Zenoss server. After installing and configuring the operating systems on our other Linux servers, install the Net-SNMP package on each, using the following command on the Ubuntu server:

sudo apt-get install snmpd

And on the Fedora server use

yum install net-snmp

Once the Net-SNMP packages are installed, edit out any other lines in the Access Control sections at the beginning of /etc/snmp/snmpd.conf, and add the following lines:

#       sec.name   source            community
com2sec local      localhost         whatsourclearanceclarence
com2sec mynetwork  192.168.142.0/24  whatsourclearanceclarence

#       group.name  sec.model  sec.name
group   MyROGroup   v1         local
group   MyROGroup   v1         mynetwork
group   MyROGroup   v2c        local
group   MyROGroup   v2c        mynetwork

#       incl/excl  subtree  mask
view    all        included  .1     80

#       context  sec.model  sec.level  prefix  read  write  notif
access  MyROGroup  ""  any  noauth  exact  all  none  none

Do not edit out any lines beneath the last Access Control sections. Please note that the above is only a mildly restrictive configuration. Consult the snmpd.conf file or the Net-SNMP documentation if you want to tighten access. On the Ubuntu server, you also may have to change the following line in the /etc/default/snmpd file to allow SNMP to bind to anything other than the local loopback address:

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'

Installing SNMP on Windows
On the Windows server, access the Add/Remove Programs utility from the Control Panel. Click on the Add/Remove Windows Components button on the left. Scroll down the list of Components, check off Management and Monitoring Tools, and click on the Details button. Check Simple Network Management Protocol in the list, and click OK to install. Close the Add/Remove window, and go into the Services console from Administrative Tools in the Control Panel. Find the SNMP service in the list, right-click on it, and click on Properties to bring up the service properties tabs. Click on the Traps tab, and type in the community name. In the list of Trap Destinations, add the IP address of the Zenoss server. Now, click on the Security tab, check off the Send authentication trap box, enter the community name, and give it READ-ONLY rights. Click OK, and restart the service.

Return to the Zenoss management Web page. Click the Devices link to go into the subclass of /Devices/Servers/Windows, and on the zProperties tab, enter the name of a domain admin account and password in the zWinUser and zWinPassword fields. This account gives Zenoss access to the Windows Management Instrumentation (WMI) on your Windows systems. Make sure to click Save at the bottom of the page before navigating away.

Adding Devices into Zenoss
Now that our systems have SNMP, we can add them into Zenoss. Devices can be added individually or by scanning the network. Let's do both. To add our Ubuntu server into Zenoss, click on the Add Device link under the Management navigation section. Enter the IP address of the server and the community name. Under Device Class Path, set the selection to /Server/Linux. You could add a variety of other hardware, software and Zenoss information on this page before adding a system, but at a minimum, an IP address, name and community name is required (Figure 1). Click the Add Device button, and the discovery process runs. When the results are displayed, click on the link to the new device to access it.

Figure 1 Adding a Device into Zenoss



To scan the network for devices, click the Networks link under the Browse By section of the navigation menu. If your network is not in the list, add it using CIDR notation. Once added, check the box next to your network, and use the drop-down arrow to click on the Select Discover Devices option. You will see a similar results page as the one from before. When complete, click on the links at the bottom of the results page to access the new devices. Any device found will be placed in the /Discovered class. Because we should have discovered the Fedora server and the Windows server, they should be moved to the /Devices/Servers/Linux and /Devices/Servers/Windows classes, respectively. This can be done from each server's Status tab by using the main drop-down list and selecting Manager→Change Class.

If all has gone well so far, we have a functional SNMP monitoring system that is able to monitor heartbeat/availability (Figure 2) and performance information (Figure 3) on our systems. You can customize other various Status and Performance Monitors to meet your needs, but here, we will use the default localhost monitors.

Figure 2 The Zenoss Dashboard

Figure 3 Performance data is collected almost immediately after discovery

Creating Users and Setting E-Mail Alerts
At this point, we can use the dashboard to monitor the managed devices, but we will be notified only if we visit the site. It would be much more helpful if we could receive alerts via e-mail. To set up e-mail alerting, we need to create a separate user account, as alerts do not work under the admin account. Click on the Settings link under the Management navigation section. Using the drop-down arrow on the menu, select Add User. Enter a user name and e-mail address when prompted. Click on the new user in the list to edit its properties. Enter a password for the new account, and assign a role of Manager. Click Save at the bottom of the page. Log out of Zenoss, and log back in with the new account. Bring the settings page back up, and enter your SMTP server information. After setting up SMTP, we need to create an Alerting Rule for our new user. Click on the Users tab, and click on the account just created in the list. From the resulting page, click on the Edit tab, and enter the e-mail address to which you want alerts sent. Now, go to the Alerting Rules tab, and create a new rule using the drop-down arrow. On the edit tab of the new Alerting Rule, change the Action to email, Enabled to True, and change the Severity formula to >= Warning (Figure 4). Click Save.

Figure 4 Creating an Alert Rule

Figure 5 Zenoss alerts are sent fresh to your mailbox



The above rule sends alerts when any Production server experiences an event rated Warning or higher (Figure 5). Using a filter, you can create any number of rules and have them apply only to specific devices or groups of devices. If you want to limit your alerts by time, to working hours for example, use the Schedule tab on the Alerting Rule to define a window. If no schedule is specified (the default), the rule runs all the time. In our rule, only one user will be notified. You also can create groups of users from the Settings page, so that multiple people are alerted, or you could use a group e-mail address in your user properties.

Services and Processes
We can expand our view of the test systems by adding a process and a service for Zenoss to monitor. When we refer to a process in Zenoss, we mean an active program, usually a dæmon, running on a managed device. Zenoss uses regular expressions to monitor processes.

To monitor Postfix on the mail server, first let's define it as a process. Navigate to the Processes page under the Classes section of the navigation menu. Use the drop-down arrow next to OS Processes, and click Add Process. Enter Postfix as the process ID. When you return to the previous page, click on the link to the new process. On the edit tab of the process, enter master in the Regex field. Click Save before navigating away. Go to the zProperties tab of the

process, and make sure the zMonitor field is set to True. Click Save again. Navigate back to the mail server from the dashboard, and on the OS tab, use the topmost menu's drop-down arrow to select Add→Add OSProcess. After the process has been added, we will be alerted if the Postfix process degrades or fails. While still on the OS tab of the

Figure 6. Zenoss comes with a plethora of predefined Windows services to monitor.


server, place a check mark next to the new Postfix process, and from the OS Processes drop-down menu, select Lock OSProcess. On the next set of options, select Lock from deletion. This protects the process from being overwritten if Zenoss remodels the server.
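Because Zenoss matches the Regex field against the process-table entries it collects over SNMP, you can sanity-check a pattern with the same flavor of extended regex using grep before entering it; the sample entries below are invented for illustration:

```shell
# Simulated process-table entries; only the Postfix master daemon matches
printf '%s\n' '/usr/libexec/postfix/master' 'sshd: root@pts/0' 'crond' \
    | grep -E 'master'
# -> /usr/libexec/postfix/master
```

A pattern that matches more lines than you expect here would make Zenoss track the wrong dæmons, so it is worth the ten-second check.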

Services in Zenoss are defined by active network ports instead of running dæmons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1-65535 and include common network apps/protocols, such as SMTP (Port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the main menu's drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class Field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server, and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).

Only the Beginning
There are a multitude of other features in Zenoss that space here prevents covering, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9) and Zenpacks that provide additional monitoring and performance-capturing capabilities for common applications.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind as long as you work with networks, with Zenoss, at least you will be able to sleep at night knowing you will hear things when they go

down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

Resources

Zenoss: www.zenoss.com

Zenoss SourceForge Downloads Page: sourceforge.net/project/showfiles.php?group_id=163126

NET-SNMP: net-snmp.sourceforge.net

CentOS: www.centos.org

CentOS 5 Mirrors: isoredirect.centos.org/centos/5/isos/i386

Figure 7. Monitoring HTTP as an IPService

Figure 8. Zenoss automatically maps your network for you.

Figure 9. Multiple sites can be monitored geographically with the Google Maps API.


In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market. Dell and HP Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects, such as the Etherboot Project and the Linux Terminal Server Project, have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys world-wide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?
A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot eXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the

boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP, stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4); for example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.
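Stages 1 to 3 above typically are driven by a PXELINUX configuration on the boot server; as a minimal sketch (the server IP, filenames and NFS path here are assumptions for illustration, not values from our test network), the file might read:

```
# pxelinux.cfg/default excerpt -- IP, filenames and paths are hypothetical
default ltsp
label ltsp
  kernel vmlinuz.ltsp
  append ro initrd=initrd.ltsp ip=dhcp root=/dev/nfs \
      nfsroot=192.168.0.1:/opt/ltsp/i386
```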

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a

Thin Clients Booting over a Wireless Bridge
How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON


Figure 1 LTSP Traffic Profile and Boot Sequence

thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.

How to Set Up a Thin-Client Network with a Wireless Bridge
One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1 serving PXE clients.

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap: 192.168.0.0/24

rpc.mountd: 192.168.0.0/24

rpc.statd: 192.168.0.0/24

in.tftpd: 192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart

sudo invoke-rc.d nfs-common restart

sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.
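That copy step can be scripted. Here is a minimal sketch; the helper name is ours, and the /opt/ltsp/i386 chroot path is an assumption based on LTSP defaults, so adjust it to your build:

```shell
# Hypothetical helper: copy an apt sources.list into the LTSP chroot
# so that package installs inside the chroot use the same mirrors.
# Usage: copy_sources_into_chroot <chroot> [source-file]
copy_sources_into_chroot() {
  chroot_dir=$1
  src=${2:-/etc/apt/sources.list}   # default: the server's own list
  cp "$src" "$chroot_dir/etc/apt/sources.list"
}

# Example (run as root on a real server; chroot path assumed):
#   copy_sources_into_chroot /opt/ltsp/i386
```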

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide to getting the bridge running so that the clients can boot over the bridge:

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet

www.linuxjournal.com | 23

adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all daemon programs manually.
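The hand-edit alternative can at least be scripted. A dry-run sketch follows; the service names match the Debian/Edubuntu packages used earlier in this article, and the helper name is ours:

```shell
# Sketch: print (or, with RUN=1, execute) the restart commands for the
# daemons involved in thin-client booting, after hand-editing their
# config files. invoke-rc.d is Debian's service wrapper.
restart_boot_services() {
  for svc in dhcp3-server nfs-kernel-server nfs-common portmap; do
    if [ "${RUN:-0}" = "1" ]; then
      sudo invoke-rc.d "$svc" restart
    else
      echo "invoke-rc.d $svc restart"
    fi
  done
}

restart_boot_services   # dry run: just lists the commands to run as root
```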

Evaluating the Performance of a Thin-Client Network
The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to the wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between

the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.
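The chamber's exact dB-to-distance conversion is the vendor's own, but as a rough illustration (our assumption, not the Azimuth's model), the free-space path-loss formula FSPL(dB) = 20*log10(d) + 20*log10(f) - 147.55, with d in metres and f in Hz, can be inverted in shell:

```shell
# Illustrative sketch only: solve the free-space path-loss formula for
# distance d at the 2.4GHz 802.11g carrier, given an attenuation in dB.
atten_to_metres() {
  awk -v L="$1" 'BEGIN {
    f = 2400000000                                  # 2.4GHz in Hz
    d = 10 ^ ((L - 20 * log(f) / log(10) + 147.55) / 20)
    printf "%.1f\n", d
  }'
}

atten_to_metres 60    # 60dB of attenuation is roughly 10 metres
```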

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber;



Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Conclusion
As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our tests show, as illustrated in Figure 5, heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could

soon cease to be an issue.

In the meantime, we can recommend a couple of ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.
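Those startup tweaks can be sketched in shell. This assumes a SysV-style rc2.d symlink layout inside the client chroot, and the script names (S??ldm, S??syslogd) and helper name are our assumptions:

```shell
# Sketch (names assumed): inside the client chroot, promote LDM to the
# first service in runlevel 2 and drop syslogd from the client startup.
# Usage: tune_client_rc2 <chroot>
tune_client_rc2() {
  rc=$1/etc/rc2.d
  old=$(ls "$rc"/S[0-9][0-9]ldm 2>/dev/null | head -n 1)
  if [ -n "$old" ] && [ "$old" != "$rc/S01ldm" ]; then
    mv "$old" "$rc/S01ldm"          # run LDM as the first service
  fi
  rm -f "$rc"/S[0-9][0-9]syslogd    # no system logs on the thin client
}

# Example (assumed chroot path):
#   tune_client_rc2 /opt/ltsp/i386
```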

Acknowledgement
This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03/IN3/1396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now, you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in

remote power and remote terminals or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup
You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP
Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), and locate the subnet section (or each host section if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;

filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus
Set up a PXE server and then add menus to boot kickstart images, rescue disks and diagnostic tools all from the network. KYLE RANKIN



Install TFTP
After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page and check on its conventions, if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
# the trivial file transfer protocol. The tftp protocol is often used
# to boot diskless workstations, download configuration files to
# network-aware printers, and to start the installation process for
# some operating systems.
service tftp
{
        disable         = no
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /var/lib/tftpboot
        per_source      = 11
        cps             = 100 2
        flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.
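With iptables, that firewall exception is a single rule. The sketch below prints the rule rather than executing it (the helper name is ours); run the echoed command as root on the PXE server:

```shell
# Sketch: the iptables rule that admits inbound TFTP requests.
tftp_allow_rule() {
  echo "iptables -A INPUT -p udp --dport 69 -j ACCEPT"
}

tftp_allow_rule
```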

Add Syslinux
Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.

PXE Menu Magic
You can configure pxelinux with or without menus, and many

administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
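That search sequence can be sketched in shell for a given client. The hex formatting (uppercase for the IP, a lowercase 01- prefix for the MAC) follows the convention described above; the function name is ours:

```shell
# Sketch: list the pxelinux.cfg filenames tried for a client, from the
# MAC-specific file down to "default".
pxelinux_search_order() {
  mac=$1; ip=$2
  echo "01-$(echo "$mac" | tr 'A-F:' 'a-f-')"      # e.g. 01-88-99-aa-bb-cc-dd
  hex=$(printf '%02X%02X%02X%02X' $(echo "$ip" | tr '.' ' '))
  while [ -n "$hex" ]; do                          # drop one hex digit per miss
    echo "$hex"
    hex=${hex%?}
  done
  echo "default"
}

pxelinux_search_order 88:99:AA:BB:CC:DD 10.0.0.200
```

For 10.0.0.200 this starts at 0A0000C8 and shortens one digit at a time before falling back to default.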

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux

label linux
        kernel vmlinuz-centos-3.6
        append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?
The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what



happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools, like Knoppix and memtest86+, available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus
Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more-specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
        kernel vmlinuz-knx511
        append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx511.gz quiet BOOT_IMAGE=knoppix

label 2
        kernel vmlinuz-centos-4.5-64
        append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the

timeout is 30 seconds, after which it will boot using the options set under label 1. The display option names a message file to show by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------

<F1> Main | <F2> Help

Default image will boot in 30 seconds

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk
One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk; I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx511.gz and vmlinuz-knx511, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the one I used above in my example:
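The copy-and-rename step can be scripted; here is a sketch with a helper name of our own, using the boot/isolinux filenames described above (the CD mountpoint is an assumption):

```shell
# Sketch: copy the Knoppix kernel and initial ramdisk from a mounted CD
# into the TFTP root under distinct names.
# Usage: stage_knoppix_files <cd-mountpoint> <tftp-root>
stage_knoppix_files() {
  cp "$1/boot/isolinux/vmlinuz"     "$2/vmlinuz-knx511"
  cp "$1/boot/isolinux/miniroot.gz" "$2/miniroot-knx511.gz"
}

# Example (assumed mountpoint):
#   stage_knoppix_files /mnt/cdrom /var/lib/tftpboot
```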

label 1
        kernel vmlinuz-knx511
        append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx511.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx511 and miniroot-knx511.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove 2




from the append line.

The final step might be the largest

for you, if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.

Extra Features: Memtest86+
Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
        kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion
There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate

which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able

to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.


New LinuxJournal.com Mobile

We are all very excited to let you know that LinuxJournal.com is optimized for mobile viewing. You can enjoy all of our news, blogs and articles from anywhere you can find a data connection on your phone or mobile device.

We know you find it difficult to be separated from your Linux Journal, so now you can take LinuxJournal.com everywhere. Need to read that latest shell script trick right now? You got it.

Go to m.linuxjournal.com to enjoy this new experience, and be sure to let us know how it works for you.

-- KATHERINE DRUCKMAN

Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan
IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825-1829). Later, in 1998, these RFCs were deprecated by RFCs 2401-2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security

Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network

Creating VPNs with IPsec and SSL/TLS
How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN


Figure 1. A Basic VPN Tunnel

Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).
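In Openswan, enabling NAT-T is a matter of configuration. The following /etc/ipsec.conf fragment is a sketch based on Openswan's documented options, not an example taken from this article's test setup, and the private ranges shown are placeholders for your own networks:

```
config setup
    # accept and build UDP-encapsulated ESP when a NAT is detected
    nat_traversal=yes
    # networks that may legitimately appear behind a NAT
    virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16
```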

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. S/WAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available KLIPSand NETKEY The Linux kernel NETKEY code is a rewritefrom scratch of the KAME IPsec code The KAME Projectwas a group effort of six companies in Japan to provide afree IPv6 and IPsec (for both IPv4 and IPv6) protocol stackimplementation for variants of the BSD UNIX computeroperating system

KLIPS is not a part of the Linux kernel When using KLIPSyou must apply a patch to the kernel to support NAT-T Whenusing NETKEY NAT-T support is already inside the kernel andthere is no need to patch the kernel

When you apply firewall (iptables) rules KLIPS is the easiercase because with KLIPS you can identify IPsec traffic as thistraffic goes through ipsecX interfaces You apply iptables rulesto these interfaces in the same way you apply rules to othernetwork interfaces (such as eth0)

When using NETKEY applying firewall (iptables) rules ismuch more complex as the traffic does not flow throughipsecX interfaces one solution can be marking the packetsin the Linux kernel with iptables (with a setmark iptablesrule) This mark is a member of the kernel socket bufferstructure (struct sk_buff from the Linux kernel networkingcode) decryption of the packet does not modify that mark
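A minimal sketch of this marking approach (the mark value is arbitrary, and rules like these require root; adapt them to your actual firewall policy):

```
# Mark incoming ESP (protocol 50) packets in the mangle table.
iptables -t mangle -A PREROUTING -p esp -j MARK --set-mark 0x1

# The mark survives decryption, so the decrypted cleartext packets can
# still be matched in the filter table, even though they no longer
# arrive on a distinct ipsecX interface.
iptables -A INPUT -m mark --mark 0x1 -j ACCEPT
```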

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN
OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.

www.linuxjournal.com | 31

Figure 2. An IPsec Tunnel ESP Packet

Example: Setting Up a VPN Tunnel with IPsec and Openswan
First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right; which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
        # Simply use raw RSA keys.
        # After starting openswan, run:
        #     ipsec showhostkey --left (or --right)
        # and fill in the connection similarly
        # to the example below.
        left=192.168.0.89
        leftrsasigkey=0sAQPP...
        # The remote user:
        right=192.168.0.92
        rightrsasigkey=0sAQON...
        type=tunnel
        auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsakey

Then, copy and paste the contents of rsakey into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of:

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. Output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                              [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                         [OK]
NETKEY detected, testing for disabled ICMP send_redirects    [OK]
NETKEY detected, testing for disabled ICMP accept_redirects  [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets)  [OK]
Checking that pluto is running                               [OK]
Checking for 'ip' command                                    [OK]
Checking for 'iptables' command                              [OK]
Opportunistic Encryption Support                             [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets in each direction; this is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.


Example: Setting Up a VPN Tunnel with OpenVPN
First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side over a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.
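For example, assuming the key was generated on the server, it could be copied to a hypothetical client host over SSH:

```
# Restrict permissions on the shared key, then copy it over SSH.
chmod 600 static.key
scp static.key root@client.example.com:/etc/openvpn/static.key
```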

This type of shared key is the simplest; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file, named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.
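For instance, rules along these lines (run as root; adjust them if you changed the port or use tap devices) make sure the tunnel traffic is allowed:

```
# Accept OpenVPN's UDP port on the physical interface.
iptables -A INPUT -p udp --dport 1194 -j ACCEPT

# Accept traffic on any TUN interface (tun+ matches tun0, tun1, ...).
iptables -A INPUT -i tun+ -j ACCEPT
iptables -A FORWARD -i tun+ -j ACCEPT
```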

Start the VPN on the server by running openvpn server.conf, and start it on the client by running openvpn client.conf.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar  3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official
port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000
as the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (point-to-point) network device, and the TAP interface emulates an Ethernet device. A user-space program can open a TUN device and can read from or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would apply them to an Ethernet device (such as eth0).
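For instance, with the ip tuntap subcommand of iproute2 (available in recent iproute2 releases; needs root, and the device name here is arbitrary), you can create and address a TUN device by hand:

```
# Create a TUN device, bring it up and give it point-to-point addresses.
ip tuntap add dev tun9 mode tun
ip link set tun9 up
ip addr add 10.9.0.1 peer 10.9.0.2 dev tun9

# Firewall it like any other interface.
iptables -A INPUT -i tun9 -j ACCEPT

# Remove it when done.
ip tuntap del dev tun9 mode tun
```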

IPsec and OpenVPN: a Short Comparison
IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionality, which enables them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not currently supported by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in the kernel, OpenVPN runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement
Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of the Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and OpenSolaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era
Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

- Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

- Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

- Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

- Stored programs offered clear security advantages because, in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C# and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security
Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects (and from other programming languages) is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.
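As a sketch of how such an account might be locked down (the account, database and procedure names are invented for illustration):

```sql
-- The application account may only call the procedure; it receives
-- no direct SELECT/INSERT/UPDATE privileges on the underlying tables.
GRANT EXECUTE ON PROCEDURE bank.transfer_funds TO 'webapp'@'%';
```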

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

Data Abstraction
It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequence of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
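As a sketch of what such a fund-transfer procedure might look like (the table and column names are invented, and error handling is kept minimal):

```sql
DELIMITER //

CREATE PROCEDURE transfer_funds(
    IN  from_acct INT,
    IN  to_acct   INT,
    IN  amount    DECIMAL(10,2),
    OUT status    VARCHAR(20))
BEGIN
    DECLARE from_balance DECIMAL(10,2);

    START TRANSACTION;

    -- Lock the source row and read its balance in one round trip.
    SELECT balance INTO from_balance
      FROM accounts
     WHERE account_id = from_acct
       FOR UPDATE;

    IF from_balance >= amount THEN
        UPDATE accounts SET balance = balance - amount
         WHERE account_id = from_acct;
        UPDATE accounts SET balance = balance + amount
         WHERE account_id = to_acct;
        COMMIT;
        SET status = 'OK';
    ELSE
        ROLLBACK;
        SET status = 'INSUFFICIENT FUNDS';
    END IF;
END //

DELIMITER ;
```

The client then issues a single CALL transfer_funds(...) rather than half a dozen separate statements, each with its own network round trip.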

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

Creating Common Routines across Multiple Applications
Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

For instance, in a banking application, transfer-of-funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed
It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network: the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation
Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping
It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?
Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements, such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer, while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

- Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes, and they take effect for all applications that access the database, including ad hoc SQL.

- The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

- The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006; with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.


In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up, with no downtime, is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.

The ha.cf File
The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around! Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalivemdashthetime between heartbeats in seconds I generally like to havethis set to one or two but servers under heavy loads mightnot be able to send heartbeats in a timely manner So ifyoursquore seeing a lot of warnings about late heartbeats try

Getting Started withHeartbeatYour first step toward high-availability bliss DANIEL BARTHOLOMEW

SYSTEM ADMINISTRATION

Listing 1. The /etc/ha.d/authkeys File

auth 1

1 sha1 ThisIsAVeryLongAndBoringPassword

increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.
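For reference, an mcast line takes a device, a multicast group, a UDP port, a TTL and a loop flag. The values below are illustrative guesses, not taken from the article; check the Heartbeat documentation before using them:

```
# Sketch of an mcast directive for ha.cf (hypothetical values):
# mcast <device> <multicast-group> <port> <ttl> <loop>
mcast eth0 225.0.0.1 694 1 0
```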

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state

www.linuxjournal.com | 39

Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos' heartbeat IP address is 192.168.1.11

# phobos' heartbeat IP address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File

The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).


Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.
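As a sketch of that sync job, the crontab entry on the primary might look like the following. The document root path is invented for illustration; only the hourly rsync itself comes from the article:

```
# Hypothetical crontab entry on briggs: push the static HTML to the
# standby (stratton) once an hour. The /var/www path is an assumption.
0 * * * * rsync -a --delete /var/www/ stratton:/var/www/
```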

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting

Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion

I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include

active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha


In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation

To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more space. I recommend at least a 1GHz processor or above with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux. HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two

ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser, under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and, if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (./startconsole -u admin http://fullyqualifieddomainname:port) using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for


Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying

permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate

the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two), configured the same way (Fedora + FDS), with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already, to make life easier.
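If you go the /etc/hosts route, a minimal sketch of the entries to add on both servers might look like this; the hostnames and IP addresses are assumptions for illustration, not values from the article:

```
# Hypothetical /etc/hosts entries so each FDS server can resolve the other:
192.168.1.10   fds1.example.com   fds1
192.168.1.11   fds2.example.com   fds2
```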

The next step is creating a Replication Manager account to bind one server's directory to the other, and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On


Figure 5. User Screen

Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication, to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option and copy the resulting ldif folder to server two, then import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and, if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients

With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility, available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM, instead of your local passwd and shadow files, to authenticate users.
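Behind the scenes, enabling LDAP for User Information typically leaves nsswitch.conf with entries along these lines. This is a sketch of the usual result, not copied from a real Fedora 6 machine:

```
# Hypothetical /etc/nsswitch.conf lines after enabling LDAP lookups:
# check local files first, then fall back to the directory server.
passwd:  files ldap
shadow:  files ldap
group:   files ldap
```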


Figure 7. Enter a name and password.

Figure 8. Enter information for the RManager account.

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll

to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some dll issues with

version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings, without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.


Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


If a tree falls in the woods and no one is there to hear it, does it make a sound? This is the classic query designed to place your mind into the Zen-like state known as the silent mind. Whether or not you want to hear a tree fall, if you run a network, you probably want to hear a server when it goes down. Many organizations utilize the long-established Simple Network Management Protocol (SNMP) as a way to monitor their networks proactively and listen for things going down.

At a rudimentary level, SNMP requires only two items to work: a management server and a managed device (or devices). The management server pulls status and health information at regular intervals from the managed devices and stores the information in a table. Managed devices use local SNMP agents to notify the management server when defined behavior occurs (such as errors or "traps"), which are stored in the same table on the server. The result is an accurate, real-time reporting mechanism for outages. However, SNMP as a protocol does not stipulate how the data in these tables is to be presented and managed for the end user. That's where a promising new open-source network-monitoring software called Zenoss (pronounced Zeen-ohss) comes in.

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment. The Core version of Zenoss used in this article is freely available under the GPLv2. An Enterprise version also is available, with additional features and support. In this article, we install Zenoss on a CentOS 5.1 system to observe its usefulness in a network-monitoring role. From there, we create a simulated multisystem server network using the following systems: a Fedora-based Postfix e-mail server, an Ubuntu server running Apache and a Windows server running File and Print services. To conserve space, only the CentOS installation is discussed in detail here. For the managed systems, only SNMP installation and configuration are covered.

Building the Zenoss Server

Begin by selecting your hardware. Zenoss lacks specific hardware requirements, but it relies heavily on MySQL, so you can use MySQL requirements as a rough guideline. I recommend using the fastest processor available, 1GB of memory, fast enough hard disks to provide acceptable MySQL performance and Gigabit Ethernet for the network. I ran several test configurations, and this configuration seemed adequate enough for a medium-size network (100+ nodes/devices). To keep configuration simple, all firewalls and SELinux instances were disabled in the test environment. If you use firewalls in your environment, open ports 161 (SNMP), 8080 (Zenoss Management

Page) and 514 (if you integrate syslog with Zenoss).

Install CentOS 5.1 on the server using your own preferences. I used a bare install with no X Window System or desktop manager. Assign a static IP address and any other pertinent network information (DNS servers and so forth). After the OS install is complete, install the following packages using the yum command below:

yum install mysql mysql-server net-snmp net-snmp-utils gmp httpd

If the mysqld or the httpd service has not started after yum installs it, start it, and set it to run for your configured runlevel. Next, download the latest Zenoss Core .rpm from Sourceforge.net (2.1.3 at the time of this writing), and install it using rpm from the command line. To start all the Zenoss-related daemons after the rpm has been installed, type the following at a command prompt:

service zenoss start

Launch a Web browser from any machine, and type the IP address of the Zenoss server using port 8080 (for example, http://192.168.142.6:8080). Log in to the site using the default account admin with a password of zenoss. This brings up the main dashboard. The dashboard is a compartmentalized view of the state of your managed devices. If you don't like the default display, you can arrange your dashboard any way you want using the various drop-down lists on the portlets (windows). I recommend setting the Production States portlet to display Production, so we can see our test systems after they are added.

Almost everything related to managed devices in Zenoss revolves around classes. With classes, you can create an infinite number of systems, processes or service classifications to monitor. To begin adding devices, we need to set our SNMP community strings at the top-level Devices class. SNMP community strings are like passphrases used to authenticate traffic between devices. If one device wants to communicate with another, they must have matching community names/strings. In many deployments, administrators use the default community name of public (and/or private), which creates a security risk. I recommend changing these strings and making them into a short phrase. You can add numbers and characters to make the community name more complex to guess/crack, but I find phrases easier to remember.
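The advice above is easy to check mechanically. Here is a small, hypothetical shell helper (not part of Zenoss; the 12-character threshold is our own assumption) that rejects the well-known default strings and very short community names:

```shell
#!/bin/sh
# check_community -- reject default or too-short SNMP community strings
# (illustrative helper only; the length threshold is an assumption)
check_community() {
    c="$1"
    case "$c" in
        public|private) echo "weak: default string"; return 1 ;;
    esac
    if [ "${#c}" -lt 12 ]; then
        echo "weak: shorter than 12 characters"
        return 1
    fi
    echo "ok"
}
# e.g.: check_community whatsourclearanceclarence
```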

Zenoss and the Art of Network Monitoring
If a server goes down, do you want to hear it? JERAMIAH BOWLING

18 | www.linuxjournal.com

SYSTEM ADMINISTRATION

Click on the Devices link on the navigation menu on the left, so that Devices is listed near the top of the page. Click on the zProperties tab, and scroll down. Enter an SNMP community string in the zSNMPCommunity field. For our test environment, I used the string whatsourclearanceclarence. You can use different strings with different subclasses of systems or individual systems, but by setting it at the Devices class, it will be used for any subclasses unless it is overridden. You also could list multiple strings in the zSNMPCommunities under the Devices class, which allows you to define multiple strings for the discovery process discussed later. Make sure your community string (zSNMPCommunity) is in this list.

Installing Net-SNMP on Linux Clients
Now, let's set up our Linux systems so they can talk to the Zenoss server. After installing and configuring the operating systems on our other Linux servers, install the Net-SNMP package on each, using the following command on the Ubuntu server:

sudo apt-get install snmpd

And on the Fedora server, use:

yum install net-snmp

Once the Net-SNMP packages are installed, edit out any other lines in the Access Control sections at the beginning of /etc/snmp/snmpd.conf, and add the following lines:

#       sec.name   source           community

com2sec local      localhost        whatsourclearanceclarence

com2sec mynetwork  192.168.142.0/24 whatsourclearanceclarence

#       group.name sec.model  sec.name

group   MyROGroup  v1         local

group   MyROGroup  v1         mynetwork

group   MyROGroup  v2c        local

group   MyROGroup  v2c        mynetwork

#       incl/excl  subtree  mask

view    all        included   .1       80

#       context sec.model sec.level prefix read write notif

access  MyROGroup ""  any  noauth  exact  all  none  none

Do not edit out any lines beneath the last Access Control sections. Please note that the above is only a mildly restrictive configuration. Consult the snmpd.conf file or the Net-SNMP documentation if you want to tighten access. On the Ubuntu server, you also may have to change the following line in the /etc/default/snmpd file to allow SNMP to bind to anything other than the local loopback address:

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'
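When you have several Linux clients, the snmpd.conf edits above lend themselves to scripting. The following is a rough sketch, not part of the article's procedure, that appends an abbreviated (v2c-only) version of the access-control stanza to a snmpd.conf file only if it is not already present; the community and network are the article's example values:

```shell
#!/bin/sh
# add_snmp_acl -- idempotently append an abbreviated (v2c-only) version of
# the article's access-control stanza to the given snmpd.conf file
add_snmp_acl() {
    conf="$1"
    community="whatsourclearanceclarence"
    if ! grep -q "com2sec mynetwork" "$conf" 2>/dev/null; then
        cat >> "$conf" <<EOF
com2sec local     localhost        $community
com2sec mynetwork 192.168.142.0/24 $community
group   MyROGroup v2c local
group   MyROGroup v2c mynetwork
view    all included .1 80
access  MyROGroup "" any noauth exact all none none
EOF
    fi
}
# e.g.: add_snmp_acl /etc/snmp/snmpd.conf
```

Running it twice against the same file leaves only one copy of the stanza, which makes it safe to include in a provisioning script.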

Installing SNMP on Windows
On the Windows server, access the Add/Remove Programs utility from the Control Panel. Click on the Add/Remove Windows Components button on the left. Scroll down the list of Components, check off Management and Monitoring Tools, and click on the Details button. Check Simple Network Management Protocol in the list, and click OK to install. Close the Add/Remove window, and go into the Services console from Administrative Tools in the Control Panel. Find the SNMP service in the list, right-click on it, and click on Properties to bring up the service properties tabs. Click on the Traps tab, and type in the community name. In the list of Trap Destinations, add the IP address of the Zenoss server. Now, click on the Security tab, and check off the Send authentication trap box, enter the community name, and give it READ-ONLY rights. Click OK, and restart the service.

Return to the Zenoss management Web page. Click the Devices link to go into the subclass of /Devices/Servers/Windows, and on the zProperties tab, enter the name of a domain admin account and password in the zWinUser and zWinPassword fields. This account gives Zenoss access to the Windows Management Instrumentation (WMI) on your Windows systems. Make sure to click Save at the bottom of the page before navigating away.

Adding Devices into Zenoss
Now that our systems have SNMP, we can add them into Zenoss. Devices can be added individually or by scanning the network. Let's do both. To add our Ubuntu server into Zenoss, click on the Add Device link under the Management navigation section. Enter the IP address of the server and the community name. Under Device Class Path, set the selection to /Server/Linux. You could add a variety of other hardware, software and Zenoss information on this page before adding a system, but at a minimum, an IP address, name and community name is required (Figure 1). Click the Add Device button, and the discovery process runs. When the results are displayed, click on the link to the new device to access it.

Figure 1. Adding a Device into Zenoss



To scan the network for devices, click the Networks link under the Browse By section of the navigation menu. If your network is not in the list, add it using CIDR notation. Once added, check the box next to your network, and use the drop-down arrow to click on the Select Discover Devices option. You will see a similar results page as the one from before. When complete, click on the links at the bottom of the results page to access the new devices. Any device found will be placed in the Discovered class. Because we should have discovered the Fedora server and the Windows server, they should be moved to the /Devices/Servers/Linux and /Devices/Servers/Windows classes, respectively. This can be done from each server's Status tab by using the main drop-down list and selecting Manage→Change Class.

If all has gone well so far, we have a functional SNMP monitoring system that is able to monitor heartbeat/availability (Figure 2) and performance information (Figure 3) on our systems. You can customize other various Status and Performance Monitors to meet your needs, but here, we will use the default localhost monitors.

Figure 2. The Zenoss Dashboard

Figure 3. Performance data is collected almost immediately after discovery.

Creating Users and Setting E-Mail Alerts
At this point, we can use the dashboard to monitor the managed devices, but we will be notified only if we visit the site. It would be much more helpful if we could receive alerts via e-mail. To set up e-mail alerting, we need to create a separate user account, as alerts do not work under the admin account. Click on the Settings link under the Management navigation section. Using the drop-down arrow on the menu, select Add User. Enter a user name and e-mail address when prompted. Click on the new user in the list to edit its properties. Enter a password for the new account, and assign a role of Manager. Click Save at the bottom of the page. Log out of Zenoss, and log back in with the new account. Bring the settings page back up, and enter your SMTP server information. After setting up SMTP, we need to create an Alerting Rule for our new user. Click on the Users tab, and click on the account just created in the list. From the resulting page, click on the Edit tab, and enter the e-mail address to which you want alerts sent. Now, go to the Alerting Rules tab, and create a new rule using the drop-down arrow. On the edit tab of the new Alerting Rule, change the Action to email, Enabled to True, and change the Severity formula to >= Warning (Figure 4). Click Save.

Figure 4. Creating an Alert Rule

Figure 5. Zenoss alerts are sent fresh to your mailbox.



The above rule sends alerts when any Production server experiences an event rated Warning or higher (Figure 5). Using a filter, you can create any number of rules and have them apply only to specific devices or groups of devices. If you want to limit your alerts by time, to working hours for example, use the Schedule tab on the Alerting Rule to define a window. If no schedule is specified (the default), the rule runs all the time. In our rule, only one user will be notified. You also can create groups of users from the Settings page, so that multiple people are alerted, or you could use a group e-mail address in your user properties.

Services and Processes
We can expand our view of the test systems by adding a process and a service for Zenoss to monitor. When we refer to a process in Zenoss, we mean an active program, usually a dæmon, running on a managed device. Zenoss uses regular expressions to monitor processes.

To monitor Postfix on the mail server, first, let's define it as a process. Navigate to the Processes page under the Classes section of the navigation menu. Use the drop-down arrow next to OS Processes, and click Add Process. Enter Postfix as the process ID. When you return to the previous page, click on the link to the new process. On the edit tab of the process, enter master in the Regex field. Click Save before navigating away. Go to the zProperties tab of the process, and make sure the zMonitor field is set to True. Click Save again. Navigate back to the mail server from the dashboard, and on the OS tab, use the topmost menu's drop-down arrow to select Add→Add OSProcess. After the process has been added, we will be alerted if the Postfix process degrades or fails. While still on the OS tab of the server, place a check mark next to the new Postfix process, and from the OS Processes drop-down menu, select Lock OSProcess. On the next set of options, select Lock from deletion. This protects the process from being overwritten if Zenoss remodels the server.

Figure 6. Zenoss comes with a plethora of predefined Windows services to monitor.
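You can preview from the shell roughly what a process regex will match before committing it in the GUI: the pattern is matched against the process table, much like grepping ps output. A rough illustration (the sample process lines here are made up):

```shell
#!/bin/sh
# Approximate how an OSProcess regex such as "master" would match, by
# grepping the pattern against simulated ps output
ps_output="root     2011  0.0  postfix/master -w
postfix  2013  0.0  pickup -l -t fifo
postfix  2014  0.0  qmgr -l -t fifo"

matches=$(printf '%s\n' "$ps_output" | grep -c "master")
echo "$matches process(es) match"
```

On a live system, you would pipe real `ps` output through the same grep to confirm the pattern hits exactly the dæmon you intend.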

Services in Zenoss are defined by active network ports instead of running dæmons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1-65535 and include common network apps/protocols, such as SMTP (Port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the main menu's drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class Field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server, and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).

Only the Beginning
There are a multitude of other features in Zenoss that space here prevents covering, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9) and ZenPacks that provide additional monitoring and performance-capturing capabilities for common applications.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind, as long as you work with networks, with Zenoss, at least you will be able to sleep at night, knowing you will hear things when they go down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

Resources

Zenoss: www.zenoss.com

Zenoss SourceForge Downloads Page: sourceforge.net/project/showfiles.php?group_id=163126

NET-SNMP: net-snmp.sourceforge.net

CentOS: www.centos.org

CentOS 5 Mirrors: isoredirect.centos.org/centos/5/isos/i386

Figure 7. Monitoring HTTP as an IPService

Figure 8. Zenoss automatically maps your network for you.

Figure 9. Multiple sites can be monitored geographically with the Google Maps API.


In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market: Dell and HP Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects, such as the Etherboot Project and the Linux Terminal Server Project, have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys world-wide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?
A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot EXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP, stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4), for example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

Thin Clients Booting over a Wireless Bridge
How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON

Figure 1. LTSP Traffic Profile and Boot Sequence

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.

How to Set Up a Thin-Client Network with a Wireless Bridge
One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1, serving PXE clients.
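For orientation, a minimal /etc/ltsp/dhcpd.conf looks roughly like the sketch below. The exact values Edubuntu ships differ; the range, paths and filename here are illustrative assumptions you would adapt to your own subnet:

```
# /etc/ltsp/dhcpd.conf (illustrative fragment, default 192.168.0.0/24 subnet)
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.20 192.168.0.250;
    option routers 192.168.0.1;
    next-server 192.168.0.1;            # the LTSP/PXE server
    filename "/ltsp/i386/pxelinux.0";
    option root-path "/opt/ltsp/i386";  # NFS root for the clients
}
```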

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap: 192.168.0.0/24

rpc.mountd: 192.168.0.0/24

rpc.statd: 192.168.0.0/24

in.tftpd: 192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart

sudo invoke-rc.d nfs-common restart

sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide in getting the bridge running so that the clients can boot over the bridge:

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all dæmon programs manually.

Evaluating the Performance of a Thin-Client Network
The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to the wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us the repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.
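The attenuation-to-distance conversion can be approximated with the free-space path-loss formula, FSPL(dB) = 20 log10(d_km) + 20 log10(f_MHz) + 32.44. The sketch below is our own back-of-envelope model, not the Azimuth's calibrated conversion, and real indoor and outdoor propagation losses differ:

```shell
#!/bin/sh
# fspl_distance -- rough distance estimate (in metres) for a given path
# loss (dB) at frequency f (MHz), inverting the free-space model:
#   FSPL = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
fspl_distance() {
    awk -v loss="$1" -v f="$2" 'BEGIN {
        d_km = 10 ^ ((loss - 32.44 - 20 * log(f) / log(10)) / 20)
        printf "%.1f\n", d_km * 1000
    }'
}
# e.g., 80dB of attenuation at 2,400MHz (the 802.11g band):
fspl_distance 80 2400
```

For 80dB at 2,400MHz, the model lands in the neighbourhood of 100 metres of free-space separation.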

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber; and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

Conclusion
As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.
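The bandwidth gap is easy to quantify. If we assume a client pulls roughly 100MB during boot (our figure for illustration, not a measurement from the tests), the ideal transfer times work out as follows:

```shell
#!/bin/sh
# transfer_secs -- ideal time to move SIZE_MB of data at RATE_Mbps,
# ignoring protocol overhead and contention
transfer_secs() {
    awk -v mb="$1" -v mbps="$2" 'BEGIN { printf "%.0f\n", mb * 8 / mbps }'
}
transfer_secs 100 100   # 100Mbps Ethernet: 8 seconds
transfer_secs 100 54    # 802.11g nominal rate
transfer_secs 100 25    # 802.11g realistic throughput (under half nominal)
```

Even in this idealized model, the realistic 802.11g case takes four times as long as wired Ethernet, before contention from simultaneously booting clients is considered.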

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our tests show, as illustrated in Figure 5, heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.

Acknowledgement
This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03/IN3/1396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now, you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in remote power and remote terminals or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup
You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.
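On disk, the pieces fit together under the TFTP root. Here is a sketch that stages that directory skeleton; the pxelinux.0 source path shown is an assumption (it varies by distribution and syslinux version), and you can pass a scratch directory to experiment safely:

```shell
#!/bin/sh
# stage_tftproot -- create a TFTP root with the pxelinux config skeleton
stage_tftproot() {
    root="$1"
    mkdir -p "$root/pxelinux.cfg"
    # Copy the PXE-enabled syslinux binary; this path is one common
    # location and is an assumption -- check your distribution.
    if [ -f /usr/lib/syslinux/pxelinux.0 ]; then
        cp /usr/lib/syslinux/pxelinux.0 "$root/"
    fi
    # "default" is the fallback config file pxelinux falls back to
    # when no MAC- or IP-specific file is found
    : > "$root/pxelinux.cfg/default"
}
# e.g.: stage_tftproot /var/lib/tftpboot
```

Kernels and initrds for each OS you serve would then be copied alongside pxelinux.0 and referenced from the config files in pxelinux.cfg.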

Configure DHCP
Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), and locate the subnet section (or each host section if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;
filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}
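To see what those two directives become on the wire: in the DHCP reply, next-server and filename populate the BOOTP siaddr and file header fields that the PXE ROM reads. The sketch below is my own illustration, not part of any DHCP server, and it packs just those two fields adjacently (in a real DHCP message they sit at fixed offsets in a larger header):

```python
import struct
import socket

def pxe_boot_fields(next_server: str, filename: str) -> bytes:
    """Pack the two BOOTP fields that carry the PXE hand-off:
    'siaddr' (the next-server IP) and 'file' (the boot file name).
    Simplified: real DHCP replies embed these in a full BOOTP header."""
    siaddr = socket.inet_aton(next_server)            # 4-byte server IP
    file_field = filename.encode().ljust(128, b"\0")  # 128-byte, NUL-padded
    return siaddr + file_field

fields = pxe_boot_fields("10.0.0.1", "pxelinux.0")
assert fields[:4] == bytes([10, 0, 0, 1])
assert fields[4:].rstrip(b"\0") == b"pxelinux.0"
```

The PXE ROM reads siaddr to know where to send its TFTP request and file to know what to ask for, which is exactly the hand-off the two dhcpd.conf lines configure.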

PXE Magic: Flexible Network Booting with Menus
Set up a PXE server and then add menus to boot kickstart images, rescue disks and diagnostic tools, all from the network. KYLE RANKIN

SYSTEM ADMINISTRATION

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot.

Install TFTP
After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package, or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
#       the trivial file transfer protocol.
#       The tftp protocol is often used to boot diskless
#       workstations, download configuration files to network-aware
#       printers, and to start the installation process for
#       some operating systems.
service tftp
{
        disable         = no
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /var/lib/tftpboot
        per_source      = 11
        cps             = 100 2
        flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.
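For the curious, the tsize requirement mentioned above comes from the TFTP option extension (RFC 2349): pxelinux asks the server to report a file's size up front. A rough sketch of the read request a PXE client sends, using nothing beyond the Python standard library (this is my own illustration of the packet layout, not pxelinux code):

```python
import struct

def tftp_rrq(filename: str, mode: str = "octet", tsize: bool = True) -> bytes:
    """Build a TFTP read request (RFC 1350), optionally carrying the
    'tsize' option (RFC 2349) that pxelinux needs its server to honor."""
    packet = struct.pack("!H", 1)                  # opcode 1 = RRQ
    packet += filename.encode() + b"\0"
    packet += mode.encode() + b"\0"
    if tsize:
        packet += b"tsize\0" + b"0\0"              # "0" asks the server to fill in the size
    return packet

rrq = tftp_rrq("pxelinux.0")
assert rrq.startswith(b"\x00\x01pxelinux.0\x00octet\x00")
assert rrq.endswith(b"tsize\x000\x00")
```

A server without tsize support simply ignores the option, which is why pxelinux fails against older tftpd implementations and why the article recommends tftpd-hpa or atftp.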

Add Syslinux
Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.

PXE Menu Magic
You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
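The lookup order above is mechanical enough to sketch. This illustrative helper (my own, not part of syslinux) lists the filenames pxelinux would try, in order, for a given MAC and IP:

```python
def pxelinux_search_order(mac: str, ip: str) -> list:
    """List the config files pxelinux tries, in order. MAC files use
    lowercase hex pairs joined by hyphens with an '01-' prefix
    (01 = Ethernet); IP files are the address in uppercase hex,
    shortened one digit at a time; 'default' is the final fallback."""
    names = ["01-" + mac.lower().replace(":", "-")]
    hex_ip = "".join("%02X" % int(octet) for octet in ip.split("."))
    names += [hex_ip[:n] for n in range(len(hex_ip), 0, -1)]
    names.append("default")
    return names

order = pxelinux_search_order("88:99:AA:BB:CC:DD", "10.0.0.2")
assert order[0] == "01-88-99-aa-bb-cc-dd"
assert order[1] == "0A000002"
assert order[-1] == "default"
```

So a host at 10.0.0.2 falls back through 0A000002, 0A00000, 0A0000 and so on before finally reaching default, which is what makes the IP-range grouping trick possible.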

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux

label linux
    kernel vmlinuz-centos-3.6
    append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?
The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what

www.linuxjournal.com | 27


happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools like Knoppix and memtest86+ available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus
Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more-specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

label 2
    kernel vmlinuz-centos-4.5-64
    append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option lists a message, if there are any to display, by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------

<F1> Main | <F2> Help
Default image will boot in 30 seconds...

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.
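The directives above also are easy to generate programmatically if you manage many boot options. As a small illustration (a hypothetical helper of my own, not a syslinux tool), here is a function that renders a minimal pxelinux.cfg/default from a few labels, converting a human-friendly timeout in seconds into the tenths of a second the timeout directive expects:

```python
def pxelinux_menu(labels: dict, default: str, timeout_seconds: int,
                  display: str = "f1.msg") -> str:
    """Render a minimal pxelinux.cfg/default. 'labels' maps a label
    name to a (kernel, append) pair; timeout is stored in tenths of
    a second, so seconds are multiplied by 10."""
    lines = ["default %s" % default,
             "timeout %d" % (timeout_seconds * 10),
             "prompt 1",
             "display %s" % display]
    for name, (kernel, append) in labels.items():
        lines += ["", "label %s" % name,
                  "    kernel %s" % kernel,
                  "    append %s" % append]
    return "\n".join(lines) + "\n"

menu = pxelinux_menu(
    {"1": ("vmlinuz-knx5.1.1", "secure quiet"),
     "2": ("vmlinuz-centos-4.5-64", "text nofb")},
    default="1", timeout_seconds=30)
assert "timeout 300" in menu
assert "label 2" in menu
```

Generating the file this way keeps the seconds-to-deciseconds conversion in one place, which is an easy detail to get wrong by hand.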

Extra Features: PXE Rescue Disk
One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk, I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the ones I used above in my example:

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx5.1.1 and miniroot-knx5.1.1.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove 2




from the append line.

The final step might be the largest for you, if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.

Extra Features: Memtest86+
Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3

kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion
There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.


New LinuxJournal.com Mobile

We are all very excited to let you know that LinuxJournal.com is optimized for mobile viewing. You can enjoy all of our news, blogs and articles from anywhere you can find a data connection on your phone or mobile device. We know you find it difficult to be separated from your Linux Journal, so now you can take LinuxJournal.com everywhere. Need to read that latest shell script trick right now? You got it. Go to m.linuxjournal.com to enjoy this new experience, and be sure to let us know how it works for you.

— KATHERINE DRUCKMAN

Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan
IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825–1829). Later, in 1998, these RFCs were deprecated by RFCs 2401–2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.
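To make the encapsulation concrete, here is a toy model of my own, heavily simplified (real ESP also encrypts the payload and appends a padding/next-header trailer), showing how the SPI, the sequence number and an HMAC-based ICV wrap around the original IP packet in tunnel mode:

```python
import hmac
import hashlib
import struct

def esp_encapsulate(original_packet: bytes, spi: int, seq: int,
                    auth_key: bytes) -> bytes:
    """Toy tunnel-mode ESP: the whole original IP packet becomes the
    payload behind an 8-byte ESP header (SPI + sequence number), and
    a truncated HMAC over header+payload stands in for the ICV."""
    esp_header = struct.pack("!II", spi, seq)
    icv = hmac.new(auth_key, esp_header + original_packet,
                   hashlib.sha1).digest()[:12]   # HMAC-SHA1-96 truncation
    return esp_header + original_packet + icv

pkt = esp_encapsulate(b"original-ip-packet", spi=0xD514EED9, seq=7,
                      auth_key=b"shared-secret")
spi, seq = struct.unpack("!II", pkt[:8])
assert spi == 0xD514EED9 and seq == 7
assert pkt[8:-12] == b"original-ip-packet"
```

Because the ICV covers the packet contents, any middlebox that rewrites addresses invalidates it, which is exactly the NAT problem discussed below.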

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

Creating VPNs with IPsec and SSL/TLS
How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN

Figure 1. A Basic VPN Tunnel

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).
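The UDP encapsulation RFC 3948 describes is simple to sketch: the ESP packet rides as the payload of a UDP datagram on port 4500, and the receiver tells ESP from IKE traffic on that port by the first four bytes (IKE messages carry a zero "non-ESP marker", while an ESP packet starts with its non-zero SPI). The following is my own illustration, not Openswan code:

```python
import struct

NAT_T_PORT = 4500  # UDP port used for IKE and ESP once NAT is detected

def udp_encapsulate_esp(esp_packet: bytes, src_port: int = NAT_T_PORT,
                        dst_port: int = NAT_T_PORT) -> bytes:
    """Wrap an ESP packet in a UDP header per RFC 3948. The checksum
    is left at zero, as the RFC permits for this encapsulation."""
    length = 8 + len(esp_packet)                  # UDP header is 8 bytes
    udp_header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    return udp_header + esp_packet

datagram = udp_encapsulate_esp(b"\xd5\x14\xee\xd9" + b"rest-of-esp")
sport, dport, length, checksum = struct.unpack("!HHHH", datagram[:8])
assert (sport, dport) == (4500, 4500)
assert length == len(datagram)
```

Because the NAT rewrites only the outer UDP/IP headers, the ESP payload and its signature survive the translation intact.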

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. S/WAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a setmark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN
OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan
First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
    #
    # Simply use raw RSA keys.
    # After starting openswan, run:
    #     ipsec showhostkey --left (or --right)
    # and fill in the connection similarly
    # to the example below.
    left=192.168.0.89
    leftrsasigkey=0sAQPP...
    # The remote user.
    #
    right=192.168.0.92
    rightrsasigkey=0sAQON...
    type=tunnel
    auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsa.key

Then, copy and paste the contents of rsa.key into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of:

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. An output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                             [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                        [OK]
NETKEY detected, testing for disabled ICMP send_redirects   [OK]
NETKEY detected, testing for disabled ICMP accept_redirects [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets) [OK]
Checking that pluto is running                              [OK]
Checking for 'ip' command                                   [OK]
Checking for 'iptables' command                             [OK]
Opportunistic Encryption Support                            [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets in a given direction; this is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.



Example: Setting Up a VPN Tunnel with OpenVPN
First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side over a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest key; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file, named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).
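As a rough programmatic stand-in for that netstat check, you can probe whether the port is already bound: binding a UDP socket to a port that a daemon such as openvpn owns fails with EADDRINUSE. A small sketch of my own, not part of OpenVPN:

```python
import socket

def udp_port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something already holds the given UDP port.
    Binding fails with EADDRINUSE when a daemon owns the port."""
    probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        probe.bind((host, port))
        return False          # bind succeeded, so the port was free
    except OSError:
        return True           # bind refused, so the port is taken
    finally:
        probe.close()
```

After starting the tunnel, udp_port_in_use(1194) should report True on the server; note the probe briefly claims the port when it is free, so this is a diagnostic convenience, not a monitoring tool.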

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar  3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official
port number assignment by IANA.  OpenVPN 2.0-beta16 and earlier used 5000 as
the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).

IPsec and OpenVPN: a Short Comparison
IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement
Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era
Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security
Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing it. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.
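To make this concrete, here is a minimal sketch of the definer-rights pattern. The procedure, table and account names are hypothetical, and the validation rule is invented for illustration; in the mysql client, you would wrap the CREATE PROCEDURE in DELIMITER commands so the semicolons inside the body are not treated as end-of-statement:

```sql
-- Hypothetical sketch: a stored program as the only write path to a table.
-- The logging and validation below always run, because the table is
-- reachable only through this procedure.
CREATE PROCEDURE insert_order (IN p_customer_id INT,
                               IN p_amount DECIMAL(10,2))
BEGIN
    IF p_amount > 0 THEN
        INSERT INTO orders (customer_id, amount)
            VALUES (p_customer_id, p_amount);
        INSERT INTO order_log (customer_id, amount, logged_at)
            VALUES (p_customer_id, p_amount, NOW());
    END IF;
END;

-- The application account can call the procedure but holds no
-- direct privileges on the underlying tables:
GRANT EXECUTE ON PROCEDURE mydb.insert_order TO 'app_user'@'%';
```

Because MySQL stored programs execute with the definer's privileges by default, the application account needs only EXECUTE; it never receives INSERT on the tables themselves.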

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.

SYSTEM ADMINISTRATION

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).
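As a hedged illustration (the customers table and its columns are invented), a stored procedure receives its input as a typed parameter, so the value is always treated as data and can never change the structure of the statement:

```sql
-- Hypothetical sketch: the parameter cannot alter the statement's
-- structure, so input such as "x' OR '1'='1" is matched literally
-- against the name column rather than executed as SQL.
CREATE PROCEDURE get_customer (IN p_name VARCHAR(50))
BEGIN
    SELECT customer_id, name, balance
      FROM customers
     WHERE name = p_name;
END;
```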

Data Abstraction
It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequence of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
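The fund-transfer logic might be sketched as a single procedure call along the following lines. This is an illustrative reconstruction, not the article's own listing: the accounts table, its columns and the balance check are assumptions (withdrawal-limit checks would follow the same pattern):

```sql
-- Hypothetical sketch: the whole sequence (read balances, validate,
-- update both rows, commit) happens inside the server, so the client
-- makes a single round trip instead of six or more.
CREATE PROCEDURE transfer_funds (IN p_from_acct INT,
                                 IN p_to_acct   INT,
                                 IN p_amount    DECIMAL(10,2))
BEGIN
    DECLARE v_balance DECIMAL(10,2);

    START TRANSACTION;
    SELECT balance INTO v_balance
      FROM accounts
     WHERE account_id = p_from_acct
       FOR UPDATE;
    IF v_balance >= p_amount THEN
        UPDATE accounts SET balance = balance - p_amount
         WHERE account_id = p_from_acct;
        UPDATE accounts SET balance = balance + p_amount
         WHERE account_id = p_to_acct;
        COMMIT;
    ELSE
        ROLLBACK;
    END IF;
END;
```

The application then issues one call, such as CALL transfer_funds(1001, 2002, 150.00), rather than a conversation of separate statements.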

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

Creating Common Routines across Multiple Applications
Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed
It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice between implementing it in the stored program language or in another language such as PHP or Java, you may wisely decide against using the stored program solution.

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation
Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping
It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?
Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes, and they will take effect for all applications that access the database, including ad hoc SQL.
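A hedged sketch of such a trigger; the order tables and the denormalized total column are invented for illustration:

```sql
-- Hypothetical sketch: keep a denormalized order total current on
-- every insert, no matter which application (or ad hoc session)
-- performs the insert.
CREATE TRIGGER order_items_ai
AFTER INSERT ON order_items
FOR EACH ROW
    UPDATE orders
       SET order_total = order_total + NEW.quantity * NEW.unit_price
     WHERE order_id = NEW.order_id;
```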

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.
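For example, a nightly purge might look like the following sketch; the audit_log table and the 90-day retention period are assumptions, and the scheduler itself must first be enabled with SET GLOBAL event_scheduler = ON:

```sql
-- Hypothetical sketch (MySQL 5.1+): delete audit rows older than
-- 90 days once a day, entirely inside the server.
CREATE EVENT purge_audit_log
    ON SCHEDULE EVERY 1 DAY
    DO
        DELETE FROM audit_log
         WHERE created_at < NOW() - INTERVAL 90 DAY;
```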

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
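A minimal sketch of such a function; the name and the discount rule are invented for illustration:

```sql
-- Hypothetical sketch: encapsulate a business calculation once,
-- then reuse it from any SQL statement.
CREATE FUNCTION discounted_price (p_price DECIMAL(10,2),
                                  p_pct   DECIMAL(4,2))
    RETURNS DECIMAL(10,2)
    DETERMINISTIC
    RETURN p_price * (1 - p_pct / 100);

-- Usable from ordinary SQL, for example:
-- SELECT item_id, discounted_price(price, 15) FROM order_items;
```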

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006; with Steven Feuerstein). Guy can be contacted at guyharrison@quest.com.



In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager tries to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.

The ha.cf File
The next file to configure is ha.cf, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around! Two different example ha.cf files are shown in Listings 2 and 3.

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

Listing 1. The /etc/ha.d/authkeys File

auth 1
1 sha1 ThisIsAVeryLongAndBoringPassword

The first thing you need to specify is the keepalive: the time between heartbeats, in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the cluster declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state when the primary node returns.

www l inux journa l com | 3 9

Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos heartbeat IP address is 192.168.1.11

# phobos heartbeat IP address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that, so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File
The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3> ...

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data), and the filesystem type (xfs).


Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting
Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion
I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation
To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of space. I recommend at least a 1GHz processor or above with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow: LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

SYSTEM ADMINISTRATION

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and, if you want to customize the ports as we have here, set those to the correct numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port) using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management
Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.
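Scheduling that CLI backup can be as simple as a cron entry calling the instance's db2bak script. A sketch, assuming the default install layout (the instance name slapd-yourservername and the backup directory are placeholders):

```
# crontab entry: back up the directory database nightly at 2am (paths illustrative)
0 2 * * * /opt/fedora-ds/slapd-yourservername/db2bak /opt/fedora-ds/slapd-yourservername/bak
```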

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

www.linuxjournal.com | 43

Figure 1. Install Options
Figure 2. Main Console
Figure 3. Tasks Tab
Figure 4. Main Logs

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication
Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora + FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already to make life easier.
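If DNS is not an option, static /etc/hosts entries on each server satisfy the name-resolution requirement. A sketch with illustrative addresses and hostnames (substitute your own):

```
# /etc/hosts on both servers (example values)
192.168.0.10   fds1.example.com   fds1
192.168.0.11   fds2.example.com   fds2
```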

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userRoot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then click the Save button at the bottom. On server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Figure 5. User Screen
Figure 6. Setting Up Replica
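For reference, the Replication Manager account created through the console corresponds roughly to an LDIF entry like the following. The object classes and password value here are illustrative assumptions, not output from this setup:

```
dn: uid=RManager,cn=config
objectClass: top
objectClass: person
objectClass: inetorgperson
uid: RManager
cn: Replication Manager
sn: Manager
userPassword: secret
```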

Now, you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userRoot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, and copy the resulting ldif folder to server two and import and initialize it from the Directory Server Tasks pane. When you click Next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and, if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients
With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM, the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
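On systems without the graphical utility, the same changes can be made by hand. A minimal sketch (the host, port and base DN are placeholders for your own environment):

```
# /etc/ldap.conf
host fds1.example.com
port 3891
base dc=example,dc=com

# /etc/nsswitch.conf (relevant lines)
passwd: files ldap
shadow: files ldap
group:  files ldap
```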


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now, the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some dll issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging on locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions
FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and makes it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.


Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


community string in the zSNMPCommunity field. For our test environment, I used the string whatsourclearanceclarence. You can use different strings with different subclasses of systems or individual systems, but by setting it at the Devices class, it will be used for any subclasses unless it is overridden. You also could list multiple strings in the zSNMPCommunities under the Devices class, which allows you to define multiple strings for the discovery process discussed later. Make sure your community string (zSNMPCommunity) is in this list.

Installing Net-SNMP on Linux Clients
Now, let's set up our Linux systems so they can talk to the Zenoss server. After installing and configuring the operating systems on our other Linux servers, install the Net-SNMP package on each using the following command on the Ubuntu server:

sudo apt-get install snmpd

And, on the Fedora server, use:

yum install net-snmp

Once the Net-SNMP packages are installed, edit out any other lines in the Access Control sections at the beginning of /etc/snmp/snmpd.conf, and add the following lines:

#       sec.name   source            community
com2sec local      localhost         whatsourclearanceclarence
com2sec mynetwork  192.168.142.0/24  whatsourclearanceclarence

#     group.name sec.model sec.name
group MyROGroup  v1        local
group MyROGroup  v1        mynetwork
group MyROGroup  v2c       local
group MyROGroup  v2c       mynetwork

#        incl/excl subtree mask
view all included  .1      80

#      context sec.model sec.level prefix read write notif
access MyROGroup ""  any  noauth   exact  all  none  none

Do not edit out any lines beneath the last Access Control sections. Please note that the above is only a mildly restrictive configuration. Consult the snmpd.conf file or the Net-SNMP documentation if you want to tighten access. On the Ubuntu server, you also may have to change the following line in the /etc/default/snmpd file to allow SNMP to bind to anything other than the local loopback address:

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'
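Once snmpd is restarted, it is worth confirming from the Zenoss server that the agent answers before adding the device. A sketch (the target address is a placeholder on the article's 192.168.142.0/24 network; the community string is the one used here):

```shell
# walk the system subtree with the v2c community string
snmpwalk -v2c -c whatsourclearanceclarence 192.168.142.10 system
```

A successful reply lists the system OIDs; a timeout usually means the com2sec source network or the bind address is wrong.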

Installing SNMP on Windows
On the Windows server, access the Add/Remove Programs utility from the Control Panel. Click on the Add/Remove Windows Components button on the left. Scroll down the list of Components, check off Management and Monitoring Tools, and click on the Details button. Check Simple Network Management Protocol in the list, and click OK to install. Close the Add/Remove window, and go into the Services console from Administrative Tools in the Control Panel. Find the SNMP service in the list, right-click on it, and click on Properties to bring up the service properties tabs. Click on the Traps tab, and type in the community name. In the list of Trap Destinations, add the IP address of the Zenoss server. Now, click on the Security tab, and check off the Send authentication trap box, enter the community name, and give it READ-ONLY rights. Click OK, and restart the service.

Return to the Zenoss management Web page. Click the Devices link to go into the subclass of /Devices/Servers/Windows, and on the zProperties tab, enter the name of a domain admin account and password in the zWinUser and zWinPassword fields. This account gives Zenoss access to the Windows Management Instrumentation (WMI) on your Windows systems. Make sure to click Save at the bottom of the page before navigating away.

Adding Devices into Zenoss
Now that our systems have SNMP, we can add them into Zenoss. Devices can be added individually or by scanning the network. Let's do both. To add our Ubuntu server into Zenoss, click on the Add Device link under the Management navigation section. Enter the IP address of the server and the community name. Under Device Class Path, set the selection to /Server/Linux. You could add a variety of other hardware, software and Zenoss information on this page before adding a system, but at a minimum, an IP address, name and community name is required (Figure 1). Click the Add Device button, and the discovery process runs. When the results are displayed, click on the link to the new device to access it.

Figure 1. Adding a Device into Zenoss

Available for most Linux distributions, Zenoss builds on the basic operation of SNMP and uses a comprehensive interface to manage even the largest and most diverse environment.


To scan the network for devices, click the Networks link under the Browse By section of the navigation menu. If your network is not in the list, add it using CIDR notation. Once added, check the box next to your network, and use the drop-down arrow to click on the Select Discover Devices option. You will see a similar results page as the one from before. When complete, click on the links at the bottom of the results page to access the new devices. Any device found will be placed in the /Discovered class. Because we should have discovered the Fedora server and the Windows server, they should be moved to the /Devices/Servers/Linux and /Devices/Servers/Windows classes, respectively. This can be done from each server's Status tab by using the main drop-down list and selecting Manage→Change Class.

If all has gone well so far, we have a functional SNMP monitoring system that is able to monitor heartbeat/availability (Figure 2) and performance information (Figure 3) on our systems. You can customize other various Status and Performance Monitors to meet your needs, but here we will use the default localhost monitors.

Figure 2. The Zenoss Dashboard

Figure 3. Performance data is collected almost immediately after discovery.

Creating Users and Setting E-Mail Alerts
At this point, we can use the dashboard to monitor the managed devices, but we will be notified only if we visit the site. It would be much more helpful if we could receive alerts via e-mail. To set up e-mail alerting, we need to create a separate user account, as alerts do not work under the admin account. Click on the Setting link under the Management navigation section. Using the drop-down arrow on the menu, select Add User. Enter a user name and e-mail address when prompted. Click on the new user in the list to edit its properties. Enter a password for the new account, and assign a role of Manager. Click Save at the bottom of the page. Log out of Zenoss, and log back in with the new account. Bring the settings page back up, and enter your SMTP server information. After setting up SMTP, we need to create an Alerting Rule for our new user. Click on the Users tab, and click on the account just created in the list. From the resulting page, click on the Edit tab, and enter the e-mail address to which you want alerts sent. Now, go to the Alerting Rules tab, and create a new rule using the drop-down arrow. On the edit tab of the new Alerting Rule, change the Action to email, Enabled to True, and change the Severity formula to >= Warning (Figure 4). Click Save.

Figure 4. Creating an Alert Rule

Figure 5. Zenoss alerts are sent fresh to your mailbox.


The above rule sends alerts when any Production server experiences an event rated Warning or higher (Figure 5). Using a filter, you can create any number of rules and have them apply only to specific devices or groups of devices. If you want to limit your alerts by time, to working hours for example, use the Schedule tab on the Alerting Rule to define a window. If no schedule is specified (the default), the rule runs all the time. In our rule, only one user will be notified. You also can create groups of users from the Settings page, so that multiple people are alerted, or you could use a group e-mail address in your user properties.

Services and Processes
We can expand our view of the test systems by adding a process and a service for Zenoss to monitor. When we refer to a process in Zenoss, we mean an active program, usually a dæmon, running on a managed device. Zenoss uses regular expressions to monitor processes.

To monitor Postfix on the mail server, first, let's define it as a process. Navigate to the Processes page under the Classes section of the navigation menu. Use the drop-down arrow next to OS Processes, and click Add Process. Enter Postfix as the process ID. When you return to the previous page, click on the link to the new process. On the edit tab of the process, enter master in the Regex field. Click Save before navigating away. Go to the zProperties tab of the process, and make sure the zMonitor field is set to True. Click Save again. Navigate back to the mail server from the dashboard, and on the OS tab, use the topmost menu's drop-down arrow to select Add→Add OSProcess. After the process has been added, we will be alerted if the Postfix process degrades or fails. While still on the OS tab of the server, place a check mark next to the new Postfix process, and from the OS Processes drop-down menu, select Lock OSProcess. On the next set of options, select Lock from deletion. This protects the process from being overwritten if Zenoss remodels the server.

Figure 6. Zenoss comes with a plethora of predefined Windows services to monitor.

Services in Zenoss are defined by active network ports instead of running dæmons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1-65535 and include common network apps/protocols, such as SMTP (Port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the main menu's drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class Field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server, and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).

Only the Beginning
There are a multitude of other features in Zenoss that space here prevents covering, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9) and Zenpacks that provide additional monitoring and performance-capturing capabilities for common applications.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind as long as you work with networks, with Zenoss, at least you will be able to sleep at night knowing you will hear things when they go down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

Resources

Zenoss: www.zenoss.com

Zenoss SourceForge Downloads Page: sourceforge.net/project/showfiles.php?group_id=163126

NET-SNMP: net-snmp.sourceforge.net

CentOS: www.centos.org

CentOS 5 Mirrors: isoredirect.centos.org/centos/5/isos/i386

Figure 7. Monitoring HTTP as an IPService
Figure 8. Zenoss automatically maps your network for you.

Figure 9. Multiple sites can be monitored geographically with the Google Maps API.


In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market. Dell and HP/Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects, such as the Etherboot Project and the Linux Terminal Server Project, have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys world-wide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?
A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot EXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP; stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4); for example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.
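On an LTSP server, the kernel name and boot parameters handed out in stages 1 and 2 typically come from a PXELINUX configuration file in the TFTP root. A minimal sketch (the path and options shown are common LTSP defaults, not values from this test setup):

```
# /var/lib/tftpboot/ltsp/i386/pxelinux.cfg/default (illustrative)
DEFAULT vmlinuz ro initrd=initrd.img quiet root=/dev/nfs ip=dhcp boot=nfs
```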

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

Thin Clients Booting over a Wireless Bridge
How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON

Figure 1. LTSP Traffic Profile and Boot Sequence

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.

How to Set Up a Thin-Client Network with a Wireless Bridge
One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1, serving PXE clients.
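The stock file resembles the following; adjust the subnet and server address if your network differs. This is a condensed sketch of the Edubuntu default, not a verbatim copy:

```
# /etc/ltsp/dhcpd.conf (condensed)
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.20 192.168.0.250;
    option routers 192.168.0.1;
    option subnet-mask 255.255.255.0;
    option broadcast-address 192.168.0.255;
    option root-path "/opt/ltsp/i386";
    next-server 192.168.0.1;
    filename "/ltsp/i386/pxelinux.0";
}
```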

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap: 192.168.0.0/24
rpc.mountd: 192.168.0.0/24
rpc.statd: 192.168.0.0/24
in.tftpd: 192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart
sudo invoke-rc.d nfs-common restart
sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.
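The exact chroot path depends on your client architecture; for a stock i386 LTSP chroot it typically lives under /opt/ltsp/i386 (an assumption here, not stated in the article), so the copy looks something like:

```shell
# Path is an assumption for an i386 LTSP chroot; adjust for your architecture.
sudo cp /etc/apt/sources.list /opt/ltsp/i386/etc/apt/sources.list
```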

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.
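If the client stalls instead, it can help to watch the exchange from the server side; a capture like the following shows the DHCP and TFTP phases of the boot (the interface name is an assumption), and Wireshark (see Resources) can decode the same traffic graphically:

```shell
# Watch a thin client's DHCP lease and TFTP kernel transfer during boot.
sudo tcpdump -i eth0 'port 67 or port 68 or port 69'
```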

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide to getting the bridge running so that the clients can boot over the bridge:

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all daemon programs manually.

Evaluating the Performance of a Thin-Client Network
The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to a wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.
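The chamber's own dB-to-distance calibration is proprietary, but a rough sense of the relationship comes from the free-space path-loss model, FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44, solved for distance. Both the model choice and the 2,412MHz frequency (802.11g channel 1) are our assumptions here, not the Azimuth's actual conversion:

```shell
# Approximate free-space distance for a given attenuation.
atten_db=80
awk -v f=2412 -v a="$atten_db" \
    'BEGIN { d_km = 10 ^ ((a - 32.44 - 20 * log(f) / log(10)) / 20); printf "%.1f m\n", d_km * 1000 }'
# prints "99.0 m": roughly 99 meters of free space for 80dB of attenuation
```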

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber;


Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Conclusion
As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our test shows, as illustrated in Figure 5, that heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.
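Removing syslogd from the client startup, for instance, is done inside the LTSP chroot; the chroot path and init-script name below are assumptions for an Edubuntu 5.10-era system and may differ on yours:

```shell
# Remove the syslog init script from the thin-client boot sequence (sketch).
sudo chroot /opt/ltsp/i386 update-rc.d -f sysklogd remove
```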

Acknowledgement
This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03/IN3/1396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS: in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in remote power and remote terminals or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup
You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP
Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), locate the subnet section (or each host section if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;
filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}
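After editing, it is worth asking dhcpd to validate the file before restarting; the config path and init-script name vary by distribution, so both are assumptions in this sketch:

```shell
# Syntax-check the DHCP config, then restart the server.
sudo dhcpd -t -cf /etc/dhcpd.conf
sudo /etc/init.d/dhcpd restart   # may be dhcp3-server on Debian/Ubuntu
```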

PXE Magic: Flexible Network Booting with Menus
Set up a PXE server, and then add menus to boot kickstart images, rescue disks and diagnostic tools, all from the network. KYLE RANKIN


Install TFTP
After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
#   the trivial file transfer protocol.
#   The tftp protocol is often used to boot diskless
#   workstations, download configuration files to network-aware
#   printers and to start the installation process for
#   some operating systems.
service tftp
{
        disable         = no
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /var/lib/tftpboot
        per_source      = 11
        cps             = 100 2
        flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.
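As a quick sketch of that step, reload xinetd, open the port, and then fetch the boot file with a TFTP client to confirm the server side works (the firewall rule and the tftp-hpa client syntax shown are assumptions for a typical setup):

```shell
sudo /etc/init.d/xinetd reload
sudo iptables -A INPUT -p udp --dport 69 -j ACCEPT   # only if you run a software firewall
tftp 10.0.0.1 -c get pxelinux.0   # should download the file once syslinux is in place
```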

Add Syslinux
Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.
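On many distributions the file ships in the syslinux package directory; the source path below is an assumption (Debian/Ubuntu-style), so adjust it to wherever your package puts pxelinux.0:

```shell
sudo cp /usr/lib/syslinux/pxelinux.0 /var/lib/tftpboot/
sudo mkdir -p /var/lib/tftpboot/pxelinux.cfg
```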

PXE Menu Magic
You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
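As a quick illustration of that search order, this sketch derives both candidate filenames for an example host (the MAC and IP are made up); pxelinux tries the MAC-based name first, then the hex IP name shortened one digit at a time, then default:

```shell
# pxelinux.cfg filename from a MAC: lowercase, colons become hyphens, 01- prefix.
mac="88:99:AA:BB:CC:DD"
printf '01-%s\n' "$(echo "$mac" | tr 'A-Z:' 'a-z-')"    # prints 01-88-99-aa-bb-cc-dd

# pxelinux.cfg filename from an IP: each octet as two uppercase hex digits.
ip="10.0.0.200"
printf '%02X%02X%02X%02X\n' $(echo "$ip" | tr '.' ' ')  # prints 0A0000C8
```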

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux
label linux
    kernel vmlinuz-centos-3.6
    append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?
The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools like Knoppix and memtest86+ available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus
Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix
label 2
    kernel vmlinuz-centos-4.5-64
    append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option names a file to output by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------

<F1> Main | <F2> Help
Default image will boot in 30 seconds...

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk
One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk; I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the ones I used above in my example:

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also, notice that this example references the renamed vmlinuz-knx5.1.1 and miniroot-knx5.1.1.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove 2 from the append line.

The final step might be the largest for you if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.
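A minimal way to publish the copied CD contents is an /etc/exports entry like the following sketch; the network, path and export options are assumptions (read-only access is enough for booting):

```shell
# Export the Knoppix tree read-only to the PXE subnet, then apply the change.
echo '/mnt/knoppix/5.1.1 10.0.0.0/255.255.255.0(ro,async,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra
```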

Extra Features: Memtest86+
Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
    kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion
There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.



Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan
IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825-1829). Later, in 1998, these RFCs were deprecated by RFCs 2401-2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.
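The ICV is an HMAC computed over the packet with that shared secret (HMAC-SHA-1-96 per RFC 2404 in the common case). As a conceptual sketch only, here is the same primitive from the command line with OpenSSL; the key and data are made-up examples, not real ESP input:

```shell
# HMAC-SHA1 over example bytes with an example shared secret;
# ESP truncates the 160-bit result to 96 bits for the packet's ICV.
printf 'example-packet-data' | openssl dgst -sha1 -hmac 'example-secret'
```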

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

Creating VPNs with IPsec and SSL/TLS
How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN

Figure 1. A Basic VPN Tunnel

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. S/WAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY applying firewall (iptables) rules ismuch more complex as the traffic does not flow throughipsecX interfaces one solution can be marking the packetsin the Linux kernel with iptables (with a setmark iptablesrule) This mark is a member of the kernel socket bufferstructure (struct sk_buff from the Linux kernel networkingcode) decryption of the packet does not modify that mark
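As a rough sketch of that marking approach (the mark value 50 and the chain choices are assumptions for illustration, not taken from the article), the rules might look like this:

```shell
# Mark incoming ESP packets before decryption; NETKEY preserves the
# mark on the decrypted packet, so later rules can match the plaintext.
iptables -t mangle -A PREROUTING -p esp -j MARK --set-mark 50

# Accept the decrypted traffic by matching the mark rather than an
# ipsecX interface (which does not exist with NETKEY):
iptables -A INPUT -m mark --mark 50 -j ACCEPT
```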

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN

OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.

www.linuxjournal.com | 31

Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan

First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
    #
    # Simply use raw RSA keys.
    # After starting openswan, run:
    #   ipsec showhostkey --left (or --right)
    # and fill in the connection similarly
    # to the example below.
    left=192.168.0.89
    leftrsasigkey=0sAQPP...
    # The remote user.
    #
    right=192.168.0.92
    rightrsasigkey=0sAQON...
    type=tunnel
    auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsakey

Then, copy and paste the contents of rsakey into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of:

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. And, output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                              [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                         [OK]
NETKEY detected, testing for disabled ICMP send_redirects    [OK]
NETKEY detected, testing for disabled ICMP accept_redirects  [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets)  [OK]
Checking that pluto is running                               [OK]
Checking for 'ip' command                                    [OK]
Checking for 'iptables' command                              [OK]
Opportunistic Encryption Support                             [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets; this is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.


SYSTEM ADMINISTRATION

Example: Setting Up a VPN Tunnel with OpenVPN

First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side in a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.
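For example, if the key was generated on the server, it could be copied to the client over SSH. The hostname and paths here are illustrative, not from the article:

```shell
# Copy the shared key to the client over an encrypted SSH channel:
scp /etc/openvpn/static.key root@client.example.com:/etc/openvpn/static.key

# On both sides, make sure only root can read the key:
chmod 600 /etc/openvpn/static.key
```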

This type of shared key is the simplest key; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf and running openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar 3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official
port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as
the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).
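For illustration (the port and forwarding policy here are invented, not from the article), rules on tun0 look just like rules on eth0:

```shell
# Allow SSH only when it arrives through the VPN tunnel interface:
iptables -A INPUT -i tun0 -p tcp --dport 22 -j ACCEPT

# Forward traffic arriving from the tunnel out through the LAN interface:
iptables -A FORWARD -i tun0 -o eth0 -j ACCEPT
```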

IPsec and OpenVPN: a Short Comparison

IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement

Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, "UDP Encapsulation of IPsec ESP Packets": tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era

Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security

Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.
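As a sketch of this model (the bank schema, table and account names here are invented for illustration; the article itself shows no code for this):

```sql
-- The procedure executes with its creator's rights (SQL SECURITY
-- DEFINER is the default), so the application account needs only
-- EXECUTE on the procedure, not INSERT on the table.
CREATE PROCEDURE bank.log_deposit
    (IN p_account INT, IN p_amount DECIMAL(10,2))
    SQL SECURITY DEFINER
BEGIN
    INSERT INTO bank.deposits (account_id, amount, entered_at)
    VALUES (p_account, p_amount, NOW());
END;

GRANT EXECUTE ON PROCEDURE bank.log_deposit TO 'app'@'%';
-- Note: no INSERT or SELECT grant on bank.deposits for 'app'.
```

(In the mysql client, wrap the CREATE PROCEDURE in DELIMITER // ... // so the semicolons inside BEGIN...END are not interpreted as statement terminators.)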

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

MySQL 5 Stored Procedures: Relic or Revolution?

Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Data Abstraction

It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic

Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
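A simplified sketch of such a procedure (the accounts schema and status handling are invented for illustration; MySQL 5.0 has no SIGNAL statement, so an OUT parameter reports the result):

```sql
CREATE PROCEDURE transfer_funds
    (IN p_from INT, IN p_to INT, IN p_amount DECIMAL(10,2),
     OUT p_status VARCHAR(30))
BEGIN
    START TRANSACTION;
    -- Debit only if the balance covers the amount:
    UPDATE accounts SET balance = balance - p_amount
     WHERE account_id = p_from AND balance >= p_amount;
    IF ROW_COUNT() = 1 THEN
        UPDATE accounts SET balance = balance + p_amount
         WHERE account_id = p_to;
        COMMIT;
        SET p_status = 'OK';
    ELSE
        ROLLBACK;
        SET p_status = 'INSUFFICIENT FUNDS';
    END IF;
END;
```

The application then makes a single round trip, such as CALL transfer_funds(1, 2, 500.00, @status), instead of issuing six or more separate statements.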

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications

Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in


different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed

It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice



between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well, even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation

Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping

It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord) and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?

Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements, such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer, while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations

MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes and will take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
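For example, a simple deterministic calculation could be packaged as a function and used inline in queries (the function, table and column names here are invented for illustration):

```sql
CREATE FUNCTION pct_change (p_old DECIMAL(10,2), p_new DECIMAL(10,2))
    RETURNS DECIMAL(10,2) DETERMINISTIC
    RETURN (p_new - p_old) / p_old * 100;

-- Callable from ordinary SQL, like any built-in function:
SELECT product_id, pct_change(last_price, current_price) FROM prices;
```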

You Decide

The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly 2006, with Steven Feuerstein). Guy can be contacted at guyharrison@quest.com.



In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat

Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat

Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.

The ha.cf File

The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive: the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try

Getting Started with Heartbeat

Your first step toward high-availability bliss. DANIEL BARTHOLOMEW


Listing 1. The /etc/ha.d/authkeys File

auth 1
1 sha1 ThisIsAVeryLongAndBoringPassword

Sometimes there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

increasing the keepaliveThe deadtime is next This is the time to wait without

hearing from a cluster member before the surviving membersof the array declare the problem host as being dead

Next comes the warntime This setting determines howlong to wait before issuing a ldquolate heartbeatrdquo warning

Sometimes when all members of a cluster are bootedat the same time there is a significant length of timebetween when Heartbeat is started and before the net-work or serial interfaces are ready to send and receiveheartbeats The optional initdead directive takes care ofthis issue by setting an initial deadtime that applies onlywhen Heartbeat is first started

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.
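A quick way to apply that rule is to filter out the "UART: unknown" lines. A sketch with sample setserial output canned in a here-document (on a live system you would pipe from the setserial command above instead):

```shell
# Keep only devices whose UART was actually detected; the sample output
# below is hard-coded for illustration.
cat <<'EOF' > setserial.out
/dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 4
/dev/ttyS1, UART: unknown, Port: 0x02f8, IRQ: 3
EOF
grep -v 'UART: unknown' setserial.out | cut -d, -f1 > real_ports
cat real_ports
```

Here only /dev/ttyS0 survives the filter, so that is the port worth trying first.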

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary (standby) state


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos' heartbeat IP address is 192.168.1.11

# phobos' heartbeat IP address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File
The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.
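As a sketch, the same step can be wrapped so it picks whichever tool the distribution provides; the DRYRUN switch and the httpd service name here are illustrative, not part of either tool:

```shell
# Remove a service from init control using update-rc.d (Debian-style)
# when available, falling back to chkconfig (Red Hat-style).
remove_from_init() {
    svc=$1
    if command -v update-rc.d >/dev/null 2>&1; then
        cmd="update-rc.d -f $svc remove"
    else
        cmd="chkconfig --del $svc"
    fi
    # DRYRUN=1 just prints the command instead of running it (needs root)
    if [ "${DRYRUN:-0}" = "1" ]; then echo "$cmd"; else $cmd; fi
}

DRYRUN=1 remove_from_init httpd
```

Run it once per service you intend to hand over to Heartbeat.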

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located in either /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24::eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).




Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
IPaddr::192.168.1.21 \
Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
killnfsd \
nfs-common \
nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second, which is not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting
Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly, or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion
I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include

active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat, Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use, Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information so that the loss of one server does not bring the directory down, as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation
To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more space. I recommend at least a 1GHz processor or above with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two

ports to allow: LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and, if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING


numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management
Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for


Figure 1. Install Options
Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying

permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate

the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication
Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora/FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts, or use DNS. I recommend having DNS and NTP in place already to make life easier.
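The name-resolution half of that check is easy to script; a sketch (the peer hostname is illustrative, and localhost stands in here so the check runs anywhere):

```shell
# Pre-replication sanity check: can this host resolve the peer's name?
# Replace "localhost" with the real peer, e.g. server2.example.com.
peer=localhost
if getent hosts "$peer" >/dev/null; then
    echo "$peer resolves"
else
    echo "$peer does not resolve" >&2
fi
```

Repeat the check on each server pointing at the other, and compare clocks on both hosts (for example, with date run at the same moment) before continuing.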

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On




Figure 5. User Screen
Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on the second server, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and, if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients
With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility, available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tab. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM, instead of your local passwd and shadow files, to authenticate users.
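For reference, the changes the utility makes boil down to a few lines in those files. A hedged sketch of a typical result (the server name and base DN are illustrative; the utility generates the real values for you):

```
# /etc/nsswitch.conf -- consult LDAP after local files
passwd: files ldap
shadow: files ldap
group:  files ldap

# /etc/ldap.conf -- where to find the directory
host ldap.example.com
base dc=example,dc=com
```

If a client misbehaves, comparing these files against a working machine is usually the fastest diagnosis.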


Figure 7. Enter a name and description for the agreement.

Figure 8. Enter information for the RManager account.

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll

to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some dll issues with

version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions
FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.




Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525



To scan the network for devices, click the Networks link under the Browse By section of the navigation menu. If your network is not in the list, add it using CIDR notation. Once added, check the box next to your network, and use the drop-down arrow to click on the Select Discover Devices option. You will see a similar results page as the one from before. When complete, click on the links at the bottom of the results page to access the new devices. Any device found will be placed in the Discovered class. Because we should have discovered the Fedora server and the Windows server, they should be moved to the /Devices/Servers/Linux and /Devices/Servers/Windows classes, respectively. This can be done from each server's Status tab by using the main drop-down list and selecting Manage→Change Class.

If all has gone well so far, we have a functional SNMP monitoring system that is able to monitor heartbeat/availability (Figure 2) and performance information (Figure 3) on our systems. You can customize other various Status and Performance Monitors to meet your needs, but here we will use the default localhost monitors.

Figure 2. The Zenoss Dashboard

Figure 3. Performance data is collected almost immediately after discovery.

Creating Users and Setting E-Mail Alerts
At this point, we can use the dashboard to monitor the managed devices, but we will be notified only if we visit the site. It would be much more helpful if we could receive alerts via e-mail. To set up e-mail alerting, we need to create a separate user account, as alerts do not work under the admin account. Click on the Settings link under the Management navigation section. Using the drop-down arrow on the menu, select Add User. Enter a user name and e-mail address when prompted. Click on the new user in the list to edit its properties. Enter a password for the new account, and assign a role of Manager. Click Save at the bottom of the page. Log out of Zenoss, and log back in with the new account. Bring the settings page back up, and enter your SMTP server information. After setting up SMTP, we need to create an Alerting Rule for our new user. Click on the Users tab, and click on the account just created in the list. From the resulting page, click on the Edit tab, and enter the e-mail address to which you want alerts sent. Now, go to the Alerting Rules tab, and create a new rule using the drop-down arrow. On the edit tab of the new Alerting Rule, change the Action to email, Enabled to True, and change the Severity formula to >= Warning (Figure 4). Click Save.

Figure 4. Creating an Alert Rule

Figure 5. Zenoss alerts are sent fresh to your mailbox.



The above rule sends alerts when any Production server experiences an event rated Warning or higher (Figure 5). Using a filter, you can create any number of rules and have them apply only to specific devices or groups of devices. If you want to limit your alerts by time, to working hours for example, use the Schedule tab on the Alerting Rule to define a window. If no schedule is specified (the default), the rule runs all the time. In our rule, only one user will be notified. You also can create groups of users from the Settings page, so that multiple people are alerted, or you could use a group e-mail address in your user properties.

Services and Processes
We can expand our view of the test systems by adding a process and a service for Zenoss to monitor. When we refer to a process in Zenoss, we mean an active program, usually a daemon, running on a managed device. Zenoss uses regular expressions to monitor processes.

To monitor Postfix on the mail server, first let's define it as a process. Navigate to the Processes page under the Classes section of the navigation menu. Use the drop-down arrow next to OS Processes, and click Add Process. Enter Postfix as the process ID. When you return to the previous page, click on the link to the new process. On the edit tab of the process, enter master in the Regex field. Click Save before navigating away. Go to the zProperties tab of the

process, and make sure the zMonitor field is set to True. Click Save again. Navigate back to the mail server from the dashboard, and on the OS tab, use the topmost menu's drop-down arrow to select Add→Add OSProcess. After the process has been added, we will be alerted if the Postfix process degrades or fails. While still on the OS tab of the

Figure 6 Zenoss comes with a plethora of predefined Windows servicesto monitor

wwwl inux journa l com | 2 1

server place a check mark next to the new Postifx processand from the OS Processes drop-down menu select LockOSProcess On the next set of options select Lock fromdeletion This protects the process from being overwrittenif Zenoss remodels the server

Services in Zenoss are defined by active network ports instead of running dæmons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1-65535 and include common network apps/protocols, such as SMTP (Port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the main menu's drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).

Only the Beginning
There are a multitude of other features in Zenoss that space here prevents covering, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9) and Zenpacks that provide additional monitoring and performance-capturing capabilities for common applications.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind as long as you work with networks, with Zenoss, at least you will be able to sleep at night knowing you will hear things when they go down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

Resources

Zenoss: www.zenoss.com

Zenoss SourceForge Downloads Page: sourceforge.net/project/showfiles.php?group_id=163126

NET-SNMP: net-snmp.sourceforge.net

CentOS: www.centos.org

CentOS 5 Mirrors: isoredirect.centos.org/centos/5/isos/i386

Figure 7. Monitoring HTTP as an IPService

Figure 8. Zenoss automatically maps your network for you.

Figure 9. Multiple sites can be monitored geographically with the Google Maps API.


In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market. Dell and HP/Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects, such as the Etherboot Project and the Linux Terminal Server Project, have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys world-wide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?
A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot EXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP; stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4); for example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

Thin Clients Booting over a Wireless Bridge
How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON

Figure 1. LTSP Traffic Profile and Boot Sequence

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means of operation, and other examples, such as ICA or NX, can consume as little as 5kbps of bandwidth. This level of traffic is suitable for transmitting over wireless links.

How to Set Up a Thin-Client Network with a Wireless Bridge
One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1, serving PXE clients.
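For reference, the relevant parts of an LTSP dhcpd.conf for that default setup look roughly like the following. The exact values and paths are illustrative and may differ from the file your ltsp-server package ships, so treat this as a sketch rather than a drop-in file:

```
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.20 192.168.0.250;
    option routers 192.168.0.1;
    next-server 192.168.0.1;              # the LTSP/TFTP server
    filename "/ltsp/i386/pxelinux.0";     # bootstrap served over TFTP
    option root-path "/opt/ltsp/i386";    # NFS root for the clients
}
```

If you change the range or server address, keep next-server, filename and root-path consistent with where your server actually lives.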

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap: 192.168.0.0/24

rpc.mountd: 192.168.0.0/24

rpc.statd: 192.168.0.0/24

in.tftpd: 192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart

sudo invoke-rc.d nfs-common restart

sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide in getting the bridge running so that the clients can boot over the bridge:

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing, and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all dæmon programs manually.

Evaluating the Performance of a Thin-Client Network
The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to the wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber;



Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Conclusion
As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our tests show, as illustrated in Figure 5, heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.

Acknowledgement
This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03IN31396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now, you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in remote power and remote terminals or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup
You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP
Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), and locate the subnet section (or each host section, if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;

filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus
Set up a PXE server, and then add menus to boot kickstart images, rescue disks and diagnostic tools, all from the network. KYLE RANKIN



Install TFTP
After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package, or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf, if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions, if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
# the trivial file transfer protocol.
# The tftp protocol is often used to boot diskless
# workstations, download configuration files to network-aware
# printers, and to start the installation process for
# some operating systems.

service tftp
{
    disable         = no
    socket_type     = dgram
    protocol        = udp
    wait            = yes
    user            = root
    server          = /usr/sbin/in.tftpd
    server_args     = -s /var/lib/tftpboot
    per_source      = 11
    cps             = 100 2
    flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.

Add Syslinux
Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.

PXE Menu Magic
You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).
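The mapping is mechanical, so it's easy to script. This small helper (my own, not part of syslinux) turns a colon-separated MAC address into the filename pxelinux will look for; the leading 01 is the ARP hardware type for Ethernet:

```shell
# Convert a MAC address to the pxelinux.cfg filename pxelinux searches for:
# prefix "01-", colons replaced by hyphens, hex pairs lowercased.
mac_to_pxe_filename() {
    echo "01-$1" | tr ':' '-' | tr 'A-Z' 'a-z'
}

mac_to_pxe_filename "88:99:AA:BB:CC:DD"   # prints 01-88-99-aa-bb-cc-dd
```

This is handy when generating per-host config files from an inventory list.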

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.
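The hex form of the IP and the shrinking search sequence are easy to reproduce. This helper (again my own, for illustration) prints the filenames pxelinux would try for a given IPv4 address, longest first:

```shell
# Print the hex-IP config filenames pxelinux tries, longest first.
# Example: 10.0.0.1 -> 0A000001, 0A00000, 0A0000, ... down to 0
pxe_hex_search_order() {
    # split the dotted quad and print each octet as two uppercase hex digits
    hex=$(printf '%02X%02X%02X%02X' $(echo "$1" | tr '.' ' '))
    while [ -n "$hex" ]; do
        echo "$hex"
        hex=${hex%?}    # drop the final hex digit and try again
    done
}

pxe_hex_search_order 10.0.0.1
```

Running it for 10.0.0.1 shows why a whole rack in 10.0.0.x can share one file: every host in the subnet falls back to the same shorter prefix names.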

Finally, if no specific files can be found, pxelinux will look for a file named default and use it.

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux
label linux
    kernel vmlinuz-centos-3.6
    append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?
The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools, like Knoppix and memtest86+, available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus
Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

label 2
    kernel vmlinuz-centos-4.5-64
    append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option names a file to output to the screen by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------
<F1> Main | <F2> Help
Default image will boot in 30 seconds

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk
One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk, I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the one I used above in my example:

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx511 and miniroot-knx511.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove the 2

28 | www.linuxjournal.com

SYSTEM ADMINISTRATION

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them.

from the append line.

The final step might be the largest for you, if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix511. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.

Extra Features: Memtest86+
Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
  kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.
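Putting the pieces from this article together, the finished config file can be assembled like this. This is a sketch: TFTPROOT stands in for /var/lib/tftpboot, and pxelinux requires each append to be a single line:

```shell
# Sketch: assemble a pxelinux.cfg/default combining the Knoppix and
# memtest86+ labels described above. TFTPROOT stands in for
# /var/lib/tftpboot so the example is safe to run anywhere.
TFTPROOT=/tmp/tftpboot-demo
mkdir -p "$TFTPROOT/pxelinux.cfg"
cat > "$TFTPROOT/pxelinux.cfg/default" <<'EOF'
default 1
timeout 300
display f1.msg

label 1
  kernel vmlinuz-knx511
  append secure nfsdir=10.0.0.1:/mnt/knoppix511 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx511.gz quiet BOOT_IMAGE=knoppix

label 3
  kernel memtest
EOF
```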

Conclusion
There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.


New LinuxJournal.com Mobile
We are all very excited to let you know that LinuxJournal.com is optimized for mobile viewing. You can enjoy all of our news, blogs and articles from anywhere you can find a data connection on your phone or mobile device. We know you find it difficult to be separated from your Linux Journal, so now you can take LinuxJournal.com everywhere. Need to read that latest shell script trick right now? You got it. Go to m.linuxjournal.com to enjoy this new experience, and be sure to let us know how it works for you.
—KATHERINE DRUCKMAN

Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp://ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan
IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825–1829). Later, in 1998, these RFCs were deprecated by RFCs 2401–2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.
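The ICV is essentially a keyed hash. Purely as an illustration of the idea (this is not how the kernel computes it, and the payload and secret here are invented), you can compute an HMAC over some bytes with a shared secret from the command line:

```shell
# Illustration only: a keyed hash (HMAC-SHA1) over a payload with a
# shared secret. ESP's ICV is computed inside the kernel, not like this,
# but the principle is the same: without the key, the checksum cannot
# be forged.
printf 'example packet payload' | openssl dgst -sha1 -hmac 'shared-secret'
```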

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network

Creating VPNs with IPsec and SSL/TLS
How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN


Figure 1. A Basic VPN Tunnel

Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. S/WAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a set-mark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.
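A sketch of that marking approach (illustrative only; the exact chains, mark values and accept policy depend entirely on your firewall design):

```
# Mark incoming ESP packets; the mark survives decryption, so later
# rules can match the decrypted traffic by mark instead of interface.
iptables -t mangle -A PREROUTING -p esp -j MARK --set-mark 1
iptables -A INPUT -m mark --mark 1 -j ACCEPT
```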

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN
OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan
First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
        # Simply use raw RSA keys.
        # After starting Openswan, run:
        #   ipsec showhostkey --left (or --right)
        # and fill in the connection similarly
        # to the example below.
        left=192.168.0.89
        leftrsasigkey=0sAQPP
        # The remote user.
        right=192.168.0.92
        rightrsasigkey=0sAQON
        type=tunnel
        auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsakey

Then, copy and paste the contents of rsakey into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. The output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                              [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                         [OK]
NETKEY detected, testing for disabled ICMP send_redirects    [OK]
NETKEY detected, testing for disabled ICMP accept_redirects  [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets)  [OK]
Checking that pluto is running                               [OK]
Checking for 'ip' command                                    [OK]
Checking for 'iptables' command                              [OK]
Opportunistic Encryption Support                             [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets in each direction; this is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.
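Settings like this live in the config setup section at the top of /etc/ipsec.conf. A minimal sketch (the directive names are Openswan's; the values are illustrative, not required):

```
version 2.0

config setup
        # use the in-kernel NETKEY stack
        protostack=netkey
        # allow tunnels whose endpoints sit behind NAT devices
        nat_traversal=yes
```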

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.



Example: Setting Up a VPN Tunnel with OpenVPN
First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side over a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file, named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.
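For example, a bridged client configuration with a custom MTU might look like this (the host name is hypothetical; the shared key is the same static.key as above):

```
remote server.example.com
dev tap
tun-mtu 1400
secret static.key
comp-lzo
```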

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar 3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official
port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000
as the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read from or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).

IPsec and OpenVPN: a Short Comparison
IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in the kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement
Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era
Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C# and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security
Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.
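In MySQL 5, that restriction is expressed with ordinary grants. A sketch, with hypothetical database, procedure and account names:

```sql
-- The application account may run only the sanctioned procedure;
-- it is given no direct SELECT/INSERT/UPDATE privileges on the
-- underlying tables themselves.
GRANT EXECUTE ON PROCEDURE bank.transfer_funds TO 'webapp'@'%';
```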

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application.

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring their legacy of advantages and challenges to MySQL. GUY HARRISON

Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

Data Abstraction
It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequence of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
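A sketch of such a procedure in MySQL 5's stored program language (the table and column names are invented for illustration, and the balance and limit checks are omitted for brevity):

```sql
DELIMITER //

CREATE PROCEDURE transfer_funds(IN from_acct INT, IN to_acct INT,
                                IN amount DECIMAL(10,2))
BEGIN
  START TRANSACTION;
  -- Move the money and log the operation in one server-side unit of work.
  UPDATE account SET balance = balance - amount WHERE acct_id = from_acct;
  UPDATE account SET balance = balance + amount WHERE acct_id = to_acct;
  INSERT INTO transfer_log (from_acct, to_acct, amount, logged_at)
       VALUES (from_acct, to_acct, amount, NOW());
  COMMIT;
END //

DELIMITER ;
```

The client then issues a single CALL transfer_funds(...) rather than sending each statement over the network individually.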

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications
Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in


Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

For instance, in a banking application, transfer-of-funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed
It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).
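For the curious, the number-crunching procedure in such a trial looks roughly like this. This is my reconstruction of the kind of CPU-bound loop used, not the article's actual benchmark code:

```sql
DELIMITER //

-- Count the primes up to n by trial division: pure computation with
-- no table access, so it isolates the language's raw speed.
CREATE PROCEDURE count_primes(IN n INT, OUT primes INT)
BEGIN
  DECLARE i INT DEFAULT 2;
  DECLARE j INT;
  DECLARE is_prime INT;
  SET primes = 0;
  WHILE i <= n DO
    SET is_prime = 1;
    SET j = 2;
    WHILE j * j <= i AND is_prime = 1 DO
      IF i MOD j = 0 THEN
        SET is_prime = 0;
      END IF;
      SET j = j + 1;
    END WHILE;
    SET primes = primes + is_prime;
    SET i = i + 1;
  END WHILE;
END //

DELIMITER ;
```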

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice



Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation
Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping
It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?
Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes

you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements, such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes and will take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006; with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.



In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.
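As a sketch of how you might generate such a key, the following writes a two-line authkeys file with a long random sha1 key and the required 600 permissions. It targets /tmp/authkeys (a stand-in path) so it can be tried without touching /etc/ha.d.

```shell
# Generate a long random key and write a minimal authkeys file.
AUTHKEYS=/tmp/authkeys   # stand-in for /etc/ha.d/authkeys
KEY=$(head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n')
printf 'auth 1\n1 sha1 %s\n' "$KEY" > "$AUTHKEYS"
chmod 600 "$AUTHKEYS"
```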

The ha.cf File
The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive, the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW


Listing 1. The /etc/ha.d/authkeys File

auth 1

1 sha1 ThisIsAVeryLongAndBoringPassword

Sometimes there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.
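A quick way to pick out the real ports is to drop the "UART: unknown" lines from that output. Here the sample setserial output is embedded in a variable so the filter can be demonstrated without serial hardware; the port details are fabricated for illustration.

```shell
# Sample `setserial -g /dev/ttyS*` output (fabricated for illustration).
sample='/dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 4
/dev/ttyS1, UART: unknown, Port: 0x02f8, IRQ: 3'

# Keep only devices whose UART was actually detected.
real_ports=$(printf '%s\n' "$sample" | grep -v 'UART: unknown')
echo "$real_ports"
```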

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos heartbeat IP address is 192.168.1.11

# phobos heartbeat IP address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n. The last entry in my example ha.cf files is to turn

on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File
The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.
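On a Debian-style node, this housecleaning can be scripted. The sketch below only prints the update-rc.d command for each service rather than running it, and the service names are examples; drop the echo to actually remove the services.

```shell
# Print the init-removal command for each Heartbeat-managed service.
disable_for_heartbeat() {
    for svc in "$@"; do
        echo /usr/sbin/update-rc.d -f "$svc" remove
    done
}

disable_for_heartbeat apache2 nfs-kernel-server
```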

Now, you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right, the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).



Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages.

Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \

IPaddr::192.168.1.21 \

Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \

killnfsd \

nfs-common \

nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.
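A resource script only has to honor start and stop to stand a chance of working. A minimal LSB-style skeleton looks like the following; the "myservice" name is a placeholder, and writing it to /tmp keeps the sketch harmless.

```shell
# Create a minimal start/stop script of the kind Heartbeat can manage.
cat > /tmp/myservice <<'EOF'
#!/bin/sh
case "$1" in
    start)  echo "starting myservice" ;;
    stop)   echo "stopping myservice" ;;
    status) echo "myservice is running" ;;
    *)      echo "Usage: $0 {start|stop|status}" >&2; exit 1 ;;
esac
EOF
chmod +x /tmp/myservice
/tmp/myservice start
```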

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second. Not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting
Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion
I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include

active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation
To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of disk space. I recommend at least a 1GHz processor or above with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two

ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING


numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (./startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script, or directly edit /etc/rc.d/rc.local, to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management
Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for


Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying

permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate

the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.
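The same kind of user the console creates can also be expressed as LDIF, which is handy for scripted imports. This is a sketch only: the dc=example,dc=com suffix, the names, IDs and port are all placeholders, and the ldapadd invocation is left commented out.

```shell
# Write an LDIF for an OU plus a user carrying the Posix attributes that
# desktop logins will later need; every value here is a placeholder.
cat > /tmp/newuser.ldif <<'EOF'
dn: ou=Accounting,dc=example,dc=com
objectClass: organizationalUnit
ou: Accounting

dn: uid=jsmith,ou=Accounting,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
cn: John Smith
sn: Smith
uid: jsmith
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/jsmith
loginShell: /bin/bash
EOF
# ldapadd -x -H ldap://server.example.com:3891 \
#   -D "cn=Directory Manager" -W -f /tmp/newuser.ldif
```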

Replication
Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora + FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts, or use DNS. I recommend having DNS and NTP in place already to make life easier.
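A quick pre-flight check of the name-resolution requirement can be wrapped in a small helper. The peer_ok name is hypothetical; in practice you would point it at the other node's hostname rather than localhost.

```shell
# Return success if the given hostname resolves on this node.
peer_ok() {
    getent hosts "$1" > /dev/null
}

if peer_ok localhost; then
    echo "peer resolvable"
fi
```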

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add and then the Save button at the bottom. On



In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others.

Figure 5. User Screen

Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync. You also could choose to schedule your updates instead. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients
With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain POSIX information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
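For reference, the entries the utility writes look roughly like the following; the server address and base DN shown here are example values, so expect yours to differ:

```
# /etc/ldap.conf (example values)
host 192.168.0.1
base dc=example,dc=com

# /etc/nsswitch.conf after enabling LDAP
passwd: files ldap
shadow: files ldap
group:  files ldap
```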

www.linuxjournal.com | 45

Figure 7. Enter a name and password.

Figure 8. Enter information for the RManager account.

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's POSIX information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll

to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some .dll issues with

version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions
FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and makes it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.


SYSTEM ADMINISTRATION

Setting up replication in FDS is a relatively painless process.

Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525




The above rule sends alerts when any Production server experiences an event rated Warning or higher (Figure 5). Using a filter, you can create any number of rules and have them apply only to specific devices or groups of devices. If you want to limit your alerts by time to working hours, for example, use the Schedule tab on the Alerting Rule to define a window. If no schedule is specified (the default), the rule runs all the time. In our rule, only one user will be notified. You also can create groups of users from the Settings page, so that multiple people are alerted, or you could use a group e-mail address in your user properties.

Services and Processes
We can expand our view of the test systems by adding a process and a service for Zenoss to monitor. When we refer to a process in Zenoss, we mean an active program, usually a dæmon, running on a managed device. Zenoss uses regular expressions to monitor processes.

To monitor Postfix on the mail server, first let's define it as a process. Navigate to the Processes page under the Classes section of the navigation menu. Use the drop-down arrow next to OS Processes, and click Add Process. Enter Postfix as the process ID. When you return to the previous page, click on the link to the new process. On the edit tab of the process, enter master in the Regex field. Click Save before navigating away. Go to the zProperties tab of the

process, and make sure the zMonitor field is set to True. Click Save again. Navigate back to the mail server from the dashboard, and on the OS tab, use the topmost menu's drop-down arrow to select Add→Add OSProcess. After the process has been added, we will be alerted if the Postfix process degrades or fails. While still on the OS tab of the

Figure 6. Zenoss comes with a plethora of predefined Windows services to monitor.


server, place a check mark next to the new Postfix process, and from the OS Processes drop-down menu, select Lock OSProcess. On the next set of options, select Lock from deletion. This protects the process from being overwritten if Zenoss remodels the server.
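Because Zenoss matches processes by regular expression, it can help to sanity-check a pattern before saving it. The following sketch tests the master regex against hypothetical process listings; on a live mail server, you would pipe in the output of ps -eo args instead:

```shell
# Count how many sample command lines a Zenoss-style process regex matches.
regex='master'

# Hypothetical ps output; substitute real data on a live host.
sample_ps='/usr/libexec/postfix/master
/usr/sbin/sshd
pickup -l -t fifo -u'

matches=$(printf '%s\n' "$sample_ps" | grep -cE "$regex")
echo "lines matching '$regex': $matches"   # -> lines matching 'master': 1
```

A regex that matches zero lines (or too many) here would misbehave the same way inside Zenoss.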

Services in Zenoss are defined by active network ports instead of running dæmons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1-65535 and include common network apps/protocols, such as SMTP (Port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the main menu's drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).
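An IPService is considered up when its port accepts a TCP connection, and you can spot-check that same condition from a shell. This is a sketch using bash's /dev/tcp feature; the host and port arguments are whatever you want to probe:

```shell
#!/bin/bash
# Return success if host ($1) accepts a TCP connection on port ($2).
check_port() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# Mirroring the HTTP IPService check against a Web server would look like:
# check_port webserver 80 && echo "HTTP port open"
check_port 127.0.0.1 1 || echo "nothing listening on port 1"
```

If this fails while Zenoss reports the service as up (or vice versa), check for firewalls between the Zenoss host and the monitored server.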

Only the Beginning
There are a multitude of other features in Zenoss that space here prevents covering, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9) and Zenpacks that provide additional monitoring and performance-capturing capabilities for common applications.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind as long as you work with networks, with Zenoss, at least you will be able to sleep at night, knowing you will hear things when they go

down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

Resources

Zenoss: www.zenoss.com

Zenoss SourceForge Downloads Page: sourceforge.net/project/showfiles.php?group_id=163126

NET-SNMP: net-snmp.sourceforge.net

CentOS: www.centos.org

CentOS 5 Mirrors: isoredirect.centos.org/centos/5/isos/i386

Figure 7. Monitoring HTTP as an IPService

Figure 8. Zenoss automatically maps your network for you.

Figure 9. Multiple sites can be monitored geographically with the Google Maps API.


In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market: Dell and HP Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects such as the Etherboot Project and the Linux Terminal Server Project have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys world-wide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?
A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot EXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the

boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP, stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4). For example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a

Thin Clients Booting over a Wireless Bridge
How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON


Figure 1. LTSP Traffic Profile and Boot Sequence

thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.

How to Set Up a Thin-Client Network with a Wireless Bridge
One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1 serving PXE clients.
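The relevant stanza in /etc/ltsp/dhcpd.conf looks roughly like the following; the addresses and filename shown here are a sketch based on common LTSP defaults, so verify them against the file your install actually ships:

```
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.20 192.168.0.250;
    option broadcast-address 192.168.0.255;
    option routers 192.168.0.1;
    next-server 192.168.0.1;
    filename "/ltsp/i386/pxelinux.0";
}
```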

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap: 192.168.0.0/24

rpc.mountd: 192.168.0.0/24

rpc.statd: 192.168.0.0/24

in.tftpd: 192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart

sudo invoke-rc.d nfs-common restart

sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.
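That copy is easy to script. The helper below parameterizes both paths, because the chroot location (commonly /opt/ltsp/i386 on Edubuntu, an assumption worth verifying on your server) varies by architecture:

```shell
# Copy an APT sources.list into an LTSP chroot.
copy_sources() {
  local chroot_dir=$1
  local src=${2:-/etc/apt/sources.list}   # default: the server's own list
  cp "$src" "$chroot_dir/etc/apt/sources.list"
}

# On a real server (as root): copy_sources /opt/ltsp/i386
```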

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide in getting the bridge running so that the clients can boot over the bridge:

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet


adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all dæmon programs manually.

Evaluating the Performance of a Thin-Client Network
The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to a wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between

the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber;



Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Conclusion
As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our test shows, as illustrated in Figure 5, heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could

soon cease to be an issue.

In the meantime, we can recommend a couple of ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.
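Those last two tweaks boil down to renaming SysV symlinks inside the client chroot. The sketch below assumes the chroot's runlevel directory is /opt/ltsp/i386/etc/rc2.d and that the links are named S??ldm and S??sysklogd; confirm both names on your own system before running it:

```shell
# Reorder thin-client startup: run LDM first, drop syslogd (sketch).
tune_rc2d() {
  local rcdir=$1
  for link in "$rcdir"/S[0-9][0-9]ldm; do
    [ -e "$link" ] && mv "$link" "$rcdir/S01ldm"   # start LDM as early as possible
  done
  rm -f "$rcdir"/S[0-9][0-9]sysklogd               # no system logs on the client
}

# Usage: tune_rc2d /opt/ltsp/i386/etc/rc2.d
```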

Acknowledgement
This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03/IN3/1396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now, you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in

remote power and remote terminals, or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup
You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP
Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), locate the subnet section (or each host section, if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;

filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus
Set up a PXE server, and then add menus to boot kickstart images, rescue disks and diagnostic tools, all from the network. KYLE RANKIN

SYSTEM ADMINISTRATION

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot.

Install TFTP
After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package, or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
# the trivial file transfer protocol.
# The tftp protocol is often used to boot diskless
# workstations, download configuration files to network-aware
# printers, and to start the installation process for
# some operating systems.
service tftp
{
    disable         = no
    socket_type     = dgram
    protocol        = udp
    wait            = yes
    user            = root
    server          = /usr/sbin/in.tftpd
    server_args     = -s /var/lib/tftpboot
    per_source      = 11
    cps             = 100 2
    flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.

Add Syslinux
Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.
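The copy and directory creation amount to two commands. Here is a parameterized sketch; /usr/lib/syslinux is a common location for pxelinux.0 but not a guarantee, so check where your distribution actually puts it:

```shell
# Install pxelinux.0 into a TFTP root and prepare its config directory.
install_pxelinux() {
  local syslinux_dir=$1 tftp_root=$2
  cp "$syslinux_dir/pxelinux.0" "$tftp_root/"
  mkdir -p "$tftp_root/pxelinux.cfg"
}

# Typical invocation: install_pxelinux /usr/lib/syslinux /var/lib/tftpboot
```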

PXE Menu Magic

You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

- Files named 01-MACADDRESS, with hyphens between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed that it does matter that it is lowercase).

- Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

- Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
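To make that search order concrete, here is a small shell sketch (my own, not from the article) that prints the filenames pxelinux would try for an example client with MAC address 88:99:AA:BB:CC:DD and IP address 192.168.0.89:

```shell
# "01" is the ARP hardware type for Ethernet; the hex pairs are
# lowercased and joined with hyphens.
mac="88:99:AA:BB:CC:DD"
echo "01-$(echo "$mac" | tr 'A-Z:' 'a-z-')"

# The IP address becomes uppercase hex; pxelinux drops one trailing
# hex digit per failed lookup, then falls back to the file "default".
hexip=$(printf '%02X%02X%02X%02X' $(echo "192.168.0.89" | tr '.' ' '))
while [ -n "$hexip" ]; do echo "$hexip"; hexip=${hexip%?}; done
echo "default"
```

Running the sketch prints 01-88-99-aa-bb-cc-dd first, then C0A80059 down through single hex digits, and finally default.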

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux
label linux
    kernel vmlinuz-centos-3.6
    append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?

The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group them in IP ranges, what happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools like Knoppix and memtest86+ available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus

Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more-specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

label 2
    kernel vmlinuz-centos-4.5-64
    append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option lists a message, if there are any to display, by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like this:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------
<F1> Main | <F2> Help
Default image will boot in 30 seconds...

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.
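As a sketch, a help file could be created like this (the arguments shown are invented examples, and the file is written to the current directory here; it would go in /var/lib/tftpboot on a real server):

```shell
# Create a short F2 help screen describing custom boot arguments.
cat > f2.msg <<'EOF'
Custom boot arguments:
  ks=<url>        use an alternate kickstart file
  ksdevice=ethN   NIC to use for the install
Press F1 to return to the main menu.
EOF
cat f2.msg
```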

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk

One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk; I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the ones I used above in my example:

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx5.1.1 and miniroot-knx5.1.1.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove 2 from the append line.

SYSTEM ADMINISTRATION

The final step might be the largest for you if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.

Extra Features: Memtest86+

Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
    kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion

There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.



Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan

IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825-1829). Later, in 1998, these RFCs were deprecated by RFCs 2401-2412. The IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

Creating VPNs with IPsec and SSL/TLS
How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN

Figure 1. A Basic VPN Tunnel

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address" that is not accessible directly from the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. S/WAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces. One solution can be marking the packets in the Linux kernel with iptables (with a setmark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.
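A rough sketch of that marking approach follows; these are rule fragments for illustration, not a complete firewall script, and the mark value and chains are assumptions rather than anything prescribed by Openswan:

```
# Mark incoming ESP traffic in the mangle table; the mark survives
# decryption, so decrypted packets can still be matched by it.
iptables -t mangle -A PREROUTING -p esp -j MARK --set-mark 1

# Later rules can then match the decrypted traffic via its mark:
iptables -A INPUT -m mark --mark 1 -j ACCEPT
```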

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN

OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan

First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right; which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
    #
    # Simply use raw RSA keys.
    # After starting Openswan, run:
    #     ipsec showhostkey --left (or --right)
    # and fill in the connection similarly
    # to the example below.
    left=192.168.0.89
    leftrsasigkey=0sAQPP...
    # The remote user:
    right=192.168.0.92
    rightrsasigkey=0sAQON...
    type=tunnel
    auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsa.key

Then, copy and paste the contents of rsa.key into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of:

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.
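Putting the Roadwarrior pieces together, a responder-side conn section might look like the following sketch (keys are elided; using auto=add rather than auto=start is an assumption here, so that the server loads the connection and waits for the roaming client to initiate):

```
conn roadwarrior
    left=192.168.0.89
    leftrsasigkey=0sAQPP...
    right=%any
    rightrsasigkey=0sAQON...
    type=tunnel
    auto=add
```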

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get information about IPsec on your machine by typing ipsec verify. Output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                              [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                         [OK]
NETKEY detected, testing for disabled ICMP send_redirects    [OK]
NETKEY detected, testing for disabled ICMP accept_redirects  [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets)  [OK]
Checking that pluto is running                               [OK]
Checking for 'ip' command                                    [OK]
Checking for 'iptables' command                              [OK]
Opportunistic Encryption Support                             [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets in a given direction; it is an identifier of the connection (each direction of the tunnel has its own SPI).
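On a longer capture, a quick filter (my own sketch, not from the article) confirms that only the two connection SPIs appear; here the sample lines from above are inlined in place of a live capture:

```shell
# Extract the unique SPI values from tcpdump-style ESP lines.
printf '%s\n' \
  'IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)' \
  'IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)' \
  'IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)' \
  'IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)' |
  sed -n 's/.*spi=\(0x[0-9a-f]*\).*/\1/p' | sort -u
```

With a real capture, you would pipe tcpdump output through the same sed filter instead of the printf lines.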

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.


Example: Setting Up a VPN Tunnel with OpenVPN

First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side over a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.
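For instance, a bridged client configuration might look like the following sketch (the host name and MTU value are invented for illustration, not recommendations):

```
remote server.example.com
dev tap
tun-mtu 1400
secret static.key
comp-lzo
```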

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar  3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA.  OpenVPN 2.0-beta16 and earlier used 5000 as the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read from or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).

IPsec and OpenVPN: a Short Comparison

IPsec is considered the standard for VPNs; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionality, which enables them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in the kernel, OpenVPN runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement

Thanks to Mr. Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or "stored routines", to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era

Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

- Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

- Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

- Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

- Stored programs offered clear security advantages because, in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C# and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security

Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

Data Abstraction

It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
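The single-round-trip version of the fund transfer might be sketched as the following stored procedure; the table and column names are illustrative assumptions, and a real implementation would also check withdrawal limits and handle deadlocks:

```sql
-- Hypothetical sketch of a funds-transfer procedure: one call from the
-- application replaces half a dozen separate round trips.
CREATE PROCEDURE transfer_funds(
    IN  from_account INT,
    IN  to_account   INT,
    IN  amount       DECIMAL(10,2),
    OUT status       VARCHAR(30))
BEGIN
    DECLARE from_balance DECIMAL(10,2);

    START TRANSACTION;
    SELECT balance INTO from_balance
      FROM accounts WHERE account_id = from_account FOR UPDATE;

    IF from_balance < amount THEN
        ROLLBACK;
        SET status = 'INSUFFICIENT FUNDS';
    ELSE
        UPDATE accounts SET balance = balance - amount
         WHERE account_id = from_account;
        UPDATE accounts SET balance = balance + amount
         WHERE account_id = to_account;
        COMMIT;
        SET status = 'OK';
    END IF;
END;
```

In the mysql command-line client, a DELIMITER change (for example, DELIMITER //) is needed before entering a multi-statement procedure like this one.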

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications
Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

www.linuxjournal.com | 35

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed
It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine and you have a choice between implementing it in the stored program language or in another language such as PHP or Java, you may wisely decide against using the stored program solution.

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network: the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation
Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping
It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?
Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer, while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.
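For example, MySQL's non-ANSI REPLACE statement could be hidden behind a procedure such as the following sketch (table and column names are illustrative assumptions); a port to another RDBMS would rewrite only the procedure body, perhaps as a DELETE/INSERT pair or a MERGE:

```sql
-- Hypothetical wrapper that hides the MySQL-specific REPLACE statement
-- from the application layer.
CREATE PROCEDURE save_setting(IN p_name  VARCHAR(64),
                              IN p_value VARCHAR(255))
    REPLACE INTO settings (name, value) VALUES (p_name, p_value);
```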

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes, and will take effect for all applications that access the database, including ad hoc SQL.
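A minimal sketch of such a trigger might maintain a denormalized running total; the sales and account_summary tables are assumptions for illustration:

```sql
-- Hypothetical trigger keeping a denormalized total up to date;
-- it fires for every INSERT, including ad hoc SQL.
CREATE TRIGGER sales_after_insert
AFTER INSERT ON sales
FOR EACH ROW
    UPDATE account_summary
       SET total_sales = total_sales + NEW.amount
     WHERE account_id = NEW.account_id;
```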

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.
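Under 5.1 or later, a purge job of this kind might look like the following sketch; the audit_log table and the 90-day retention period are assumptions:

```sql
-- Hypothetical nightly purge using the 5.1 event scheduler (the
-- scheduler must be enabled, e.g. SET GLOBAL event_scheduler = ON).
CREATE EVENT purge_old_audit_rows
ON SCHEDULE EVERY 1 DAY
DO
    DELETE FROM audit_log
     WHERE created_at < NOW() - INTERVAL 90 DAY;
```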

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
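For instance, a business calculation could be wrapped in a stored function and then used directly in queries; the function name and formula here are illustrative assumptions:

```sql
-- Hypothetical stored function encapsulating a business calculation.
CREATE FUNCTION order_total_with_tax(subtotal DECIMAL(10,2),
                                     tax_rate DECIMAL(5,4))
RETURNS DECIMAL(10,2)
DETERMINISTIC
    RETURN ROUND(subtotal * (1 + tax_rate), 2);

-- Callable from ordinary SQL, for example:
--   SELECT order_id, order_total_with_tax(subtotal, 0.0725) FROM orders;
```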

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly 2006, with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.



In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.

The ha.cf File
The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive: the time between heartbeats, in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try increasing the keepalive.

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

Listing 1. The /etc/ha.d/authkeys File

auth 1
1 sha1 ThisIsAVeryLongAndBoringPassword

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the cluster declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2
deadtime 32
warntime 16
initdead 64
baud 19200
# On briggs, the serial device is /dev/ttyS1
# On stratton, the serial device is /dev/ttyS0
serial /dev/ttyS1
auto_failback on
node briggs
node stratton
use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1
deadtime 10
warntime 5
udpport 694
# deimos' heartbeat ip address is 192.168.1.11
# phobos' heartbeat ip address is 192.168.1.21
ucast eth1 192.168.1.11
auto_failback off
stonith_host deimos wti_nps ares.example.com erisIsTheKey
stonith_host phobos wti_nps ares.example.com erisIsTheKey
node deimos
node phobos
use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File
The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right, the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).




Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
        IPaddr::192.168.1.21 \
        Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
        killnfsd \
        nfs-common \
        nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second. Not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting
Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion
I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation
To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more space. I recommend at least a 1GHz processor or above, with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives
rm java
ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

Now, with the useradd command, create an account named fedorauser, under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd
/opt/fedora-ds/start-admin &

Directory Structure and Management
Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for

www.linuxjournal.com | 43

Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy; however, I could not get this feature to work properly.

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora + FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora and FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already to make life easier.

The next step is creating a Replication Manager account to bind one server's directory to the other, and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add and then the Save button at the bottom. On



In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others.

Figure 5. User Screen

Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).
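In LDAP terms, the Replication Manager account the console creates is just an entry under cn=config; expressed as LDIF, it might look roughly like this (the objectClasses and values are illustrative, not taken from an actual FDS export):

```
dn: uid=RManager,cn=config
objectClass: top
objectClass: person
objectClass: inetOrgPerson
uid: RManager
cn: Replication Manager
sn: Manager
userPassword: secret
```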

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients

With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
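Under the hood, the utility's changes amount to a few lines in those files; a sketch, with the server address and base DN as placeholders for your own environment:

```
# /etc/nsswitch.conf (excerpt): consult LDAP after local files
passwd: files ldap
shadow: files ldap
group:  files ldap

# /etc/ldap.conf (excerpt): where to find the FDS server
host 192.168.0.10
base dc=example,dc=com
```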


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some DLL issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging on locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and makes it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.



Setting up replication in FDS is a relatively painless process.

Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525



server, place a check mark next to the new Postfix process, and from the OS Processes drop-down menu, select Lock OSProcess. On the next set of options, select Lock from deletion. This protects the process from being overwritten if Zenoss remodels the server.

Services in Zenoss are defined by active network ports instead of running daemons. There are a plethora of services built in to the software, and you can define your own if you want to. The built-in services are broken down into two categories: IPServices and WinServices. IPServices use any port from 1-65535 and include common network apps/protocols, such as SMTP (Port 25), DNS (53) and HTTP (80). WinServices are intended for specific use with Windows servers (Figure 6).

Adding a service is much simpler than adding a process, because there are so many predefined in Zenoss. To monitor the HTTP service on our Web server, navigate to the server from the dashboard. Use the main menu's drop-down arrow on the server's OS tab, and select Add→Add IPService. Type HTTP in the Service Class Field. Notice that the field begins to prefill with matches as you type the letters. Select TCP as the protocol, and click OK. Click Save on the resulting page. As with the OSProcess procedure, return to the OS tab of the server and lock the new IPService. Zenoss is now monitoring HTTP availability on the server (Figure 7).
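An IPService check is, at heart, a TCP connect against the monitored port. A minimal sketch of the same idea in plain Python (the hostname is a placeholder, and this illustrates the concept rather than Zenoss's actual code):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds
    within timeout seconds, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check HTTP availability the way the IPService does:
# port_open("webserver.example.com", 80)
```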

Only the Beginning

There are a multitude of other features in Zenoss that space here prevents covering, including Network Maps (Figure 8), a Google Maps API for multilocation monitoring (Figure 9) and Zenpacks that provide additional monitoring and performance-capturing capabilities for common applications.

In the span of this article, we have deployed an enterprise-grade monitoring solution with relative ease. Although it's surprisingly easy to deploy, Zenoss also possesses a deep feature set. It easily rivals, if not surpasses, commercial competitors in the same product space. It is easy to manage, highly customizable and supported by a vibrant community.

Although you may not achieve the silent mind as long as you work with networks, with Zenoss, at least you will be able to sleep at night knowing you will hear things when they go down. Hopefully, they won't be trees.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

Resources

Zenoss: www.zenoss.com

Zenoss SourceForge Downloads Page: sourceforge.net/project/showfiles.php?group_id=163126

NET-SNMP: net-snmp.sourceforge.net

CentOS: www.centos.org

CentOS 5 Mirrors: isoredirect.centos.org/centos/5/isos/i386

Figure 7. Monitoring HTTP as an IPService

Figure 8. Zenoss automatically maps your network for you

Figure 9. Multiple sites can be monitored geographically with the Google Maps API


In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market: Dell and HP/Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects, such as the Etherboot Project and the Linux Terminal Server Project, have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys world-wide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?

A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot EXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP, stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4); for example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.
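The stage-1 kernel fetch uses TFTP, whose read request (RRQ) is a very small packet: a two-byte opcode of 1, then the filename and transfer mode as NUL-terminated strings (RFC 1350). A sketch of constructing one (the filename is a placeholder):

```python
import struct

def tftp_rrq(filename, mode="octet"):
    """Build a TFTP read-request packet per RFC 1350:
    | opcode=1 (2 bytes) | filename | 0 | mode | 0 |"""
    return (struct.pack("!H", 1)
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

# The client would send this over UDP to port 69 on the boot server:
# sock.sendto(tftp_rrq("vmlinuz"), (server_ip, 69))
```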

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a

Thin Clients Booting over a Wireless Bridge

How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON


Figure 1. LTSP Traffic Profile and Boot Sequence

thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.

How to Set Up a Thin-Client Network with a Wireless Bridge

One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1 serving PXE clients.

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap: 192.168.0.0/24

rpc.mountd: 192.168.0.0/24

rpc.statd: 192.168.0.0/24

in.tftpd: 192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart

sudo invoke-rc.d nfs-common restart

sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide in getting the bridge running so that the clients can boot over the bridge:

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet


adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all daemon programs manually.

Evaluating the Performance of a Thin-Client Network

The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to a wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between

the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.
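The chamber's own dB-to-distance mapping is its own, but the standard free-space path-loss model gives a rough feel for how attenuation relates to distance at a given frequency; a sketch:

```python
import math

C = 3.0e8  # speed of light in m/s

def fspl_distance(loss_db, freq_hz):
    """Invert the free-space path-loss formula
    L = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
    to estimate the distance d (in metres) for a given loss."""
    const = 20 * math.log10(4 * math.pi / C)
    return 10 ** ((loss_db - 20 * math.log10(freq_hz) - const) / 20)

# At 2.4GHz, roughly 40dB of free-space loss corresponds to about
# 1 metre of separation; each additional 20dB multiplies that by 10.
```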

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber;



Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Conclusion

As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.
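That bandwidth gap is easy to put numbers on with an idealized transfer-time comparison (the payload size and effective rates below are illustrative, and real throughput is lower still):

```python
def transfer_seconds(payload_mb, link_mbps):
    """Idealized time to move payload_mb megabytes over a link of
    link_mbps megabits per second, ignoring protocol overhead."""
    return payload_mb * 8 / link_mbps

ethernet = transfer_seconds(100, 100)  # 100MB over 100Mbps Ethernet
wifi_g = transfer_seconds(100, 20)     # same payload at ~20Mbps, a
                                       # plausible real 802.11g rate
# wifi_g is five times ethernet in this ideal case
```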

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our tests show, as illustrated in Figure 5, that heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.

Acknowledgement

This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03IN31396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now, you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network, instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in remote power and remote terminals, or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup

You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP

Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), locate the subnet section (or each host section, if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;

filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus

Set up a PXE server and then add menus to boot kickstart images, rescue disks and diagnostic tools, all from the network. KYLE RANKIN

SYSTEM ADMINISTRATION


Install TFTP

After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
#       the trivial file transfer protocol. The tftp protocol
#       is often used to boot diskless workstations, download
#       configuration files to network-aware printers, and to
#       start the installation process for some operating systems.
service tftp
{
        disable                 = no
        socket_type             = dgram
        protocol                = udp
        wait                    = yes
        user                    = root
        server                  = /usr/sbin/in.tftpd
        server_args             = -s /var/lib/tftpboot
        per_source              = 11
        cps                     = 100 2
        flags                   = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.

Add Syslinux

Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.
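These staging steps can be collected into a small shell sketch. The TFTP root matches the article's example; the source location of pxelinux.0 is an assumption (often /usr/lib/syslinux, but it varies by distribution):

```shell
# setup_pxelinux: stage pxelinux.0 under a TFTP root and create the
# pxelinux.cfg configuration directory.
# $1 = TFTP root directory, $2 = path to the pxelinux.0 shipped by syslinux
# (commonly /usr/lib/syslinux/pxelinux.0, but this varies by distro).
setup_pxelinux() {
    root="$1"
    src="$2"
    mkdir -p "$root/pxelinux.cfg" || return 1
    cp "$src" "$root/" || return 1
}

# Typical invocation on the PXE server:
# setup_pxelinux /var/lib/tftpboot /usr/lib/syslinux/pxelinux.0
```
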

PXE Menu Magic

You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines, based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
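To see the hex-IP lookup concretely, here is a small sketch (not from the article) that converts a dotted-quad IP into the 8-digit hex name pxelinux asks for first, then prints the successively shorter fallback names it tries:

```shell
# ip_to_hex: print the uppercase 8-digit hex form of a dotted-quad IP,
# which is the first IP-based filename pxelinux requests.
ip_to_hex() {
    # Word splitting of the four octets into printf arguments is intended.
    printf '%02X%02X%02X%02X' $(echo "$1" | tr '.' ' ')
}

# hex_fallbacks: print the full hex name, then each shorter name pxelinux
# tries as the previous lookup fails, down to a single digit.
hex_fallbacks() {
    h=$(ip_to_hex "$1")
    while [ -n "$h" ]; do
        echo "$h"
        h=${h%?}    # drop the last hex digit
    done
}

hex_fallbacks 10.0.0.200   # first name tried: 0A0000C8
```
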

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux

label linux
    kernel vmlinuz-centos-3.6
    append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus

The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools, like Knoppix and memtest86+, available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus

Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more-specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

label 2
    kernel vmlinuz-centos-4.5-64
    append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option lists a message file to display by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------

<F1> Main | <F2> Help
Default image will boot in 30 seconds

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk

One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk; I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the ones I used above in my example:

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx5.1.1 and miniroot-knx5.1.1.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove the 2 from the append line.

The final step might be the largest for you if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.
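The file-copying steps for the rescue image can be sketched as one helper function (the CD mount point and version suffix are the article's examples; adjust them to your disk):

```shell
# stage_knoppix: copy the Knoppix kernel and initrd from a mounted CD
# into the TFTP root, renaming them with a distinct version suffix.
# $1 = CD mount point, $2 = TFTP root, $3 = version suffix (e.g. 5.1.1)
stage_knoppix() {
    cdrom="$1" tftp="$2" ver="$3"
    cp "$cdrom/boot/isolinux/vmlinuz" "$tftp/vmlinuz-knx$ver" || return 1
    cp "$cdrom/boot/isolinux/miniroot.gz" "$tftp/miniroot-knx$ver.gz" || return 1
}

# On the PXE server, with the Knoppix CD mounted at /mnt/cdrom:
# stage_knoppix /mnt/cdrom /var/lib/tftpboot 5.1.1
```
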
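For the NFS side, a single export line matching the example share is enough. A minimal sketch follows; the client network range and the ro/async export options are illustrative assumptions, not from the article:

```shell
# add_knoppix_export: append an NFS export for the copied Knoppix tree to
# an exports file (normally /etc/exports). The 10.0.0.0/255.255.255.0
# client range and the (ro,async) options are illustrative assumptions.
add_knoppix_export() {
    exports="${1:-/etc/exports}"
    echo '/mnt/knoppix/5.1.1 10.0.0.0/255.255.255.0(ro,async)' >> "$exports"
}

# After updating /etc/exports, re-export the list on the NFS server:
# exportfs -ra
```
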

Extra Features: Memtest86+

Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available) or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
    kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion

There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more-specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests, or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.


New LinuxJournal.com Mobile

We are all very excited to let you know that LinuxJournal.com is optimized for mobile viewing. You can enjoy all of our news, blogs and articles from anywhere you can find a data connection on your phone or mobile device. We know you find it difficult to be separated from your Linux Journal, so now you can take LinuxJournal.com everywhere. Need to read that latest shell script trick right now? You got it. Go to m.linuxjournal.com to enjoy this new experience, and be sure to let us know how it works for you.

- KATHERINE DRUCKMAN

Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan

IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825-1829). Later, in 1998, these RFCs were deprecated by RFCs 2401-2412. The IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and the payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network Address Translation) device.

Creating VPNs with IPsec and SSL/TLS

How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN


Figure 1. A Basic VPN Tunnel

Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. S/WAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a setmark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.
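A minimal sketch of that marking approach follows; the mark value and accept policy are illustrative assumptions, not from the article. The iptables command is passed as a parameter so the sketch can be dry-run without root:

```shell
# mark_esp_rules: mark incoming ESP (IP protocol 50) traffic in the mangle
# table, then match decrypted packets by that preserved mark.
# $1 = iptables command; parameterized so the sketch can be dry-run
# (e.g. mark_esp_rules "echo iptables").
mark_esp_rules() {
    ipt="${1:-iptables}"
    # Mark packets arriving as ESP before decryption:
    $ipt -t mangle -A PREROUTING -p esp -j MARK --set-mark 50
    # The mark survives decryption, so filter rules can match on it:
    $ipt -A INPUT -m mark --mark 50 -j ACCEPT
}

# On the gateway (as root): mark_esp_rules
# Dry run:                  mark_esp_rules "echo iptables"
```
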

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN

OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it allows multiple clients (up to 128) to connect to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan

First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
        # Simply use raw RSA keys.
        # After starting Openswan, run:
        #     ipsec showhostkey --left (or --right)
        # and fill in the connection similarly
        # to the example below.
        left=192.168.0.89
        leftrsasigkey=0sAQPP...
        # The remote user.
        right=192.168.0.92
        rightrsasigkey=0sAQON...
        type=tunnel
        auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsa.key

Then, copy and paste the contents of rsa.key into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. The output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                             [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                        [OK]
NETKEY detected, testing for disabled ICMP send_redirects   [OK]
NETKEY detected, testing for disabled ICMP accept_redirects [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets) [OK]
Checking that pluto is running                              [OK]
Checking for 'ip' command                                   [OK]
Checking for 'iptables' command                             [OK]
Opportunistic Encryption Support                            [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets flowing in the same direction; this is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.



Example: Setting Up a VPN Tunnel with OpenVPN

First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side over a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file, named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar  3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official
port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as
the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read from or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).
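For example, allowing tunnel traffic works exactly as it would for eth0. The rules below are an illustrative sketch (an open accept policy, not from the article), with the iptables command parameterized so the sketch can be dry-run without root:

```shell
# tun_firewall_rules: accept traffic on the VPN tunnel device, just as one
# would for eth0. The blanket ACCEPT policy is an illustrative assumption.
# $1 = iptables command (parameterized for dry runs).
tun_firewall_rules() {
    ipt="${1:-iptables}"
    # Accept traffic arriving on the tunnel interface:
    $ipt -A INPUT -i tun0 -j ACCEPT
    # Allow forwarding between the tunnel and other interfaces:
    $ipt -A FORWARD -i tun0 -j ACCEPT
}

# On the VPN host (as root): tun_firewall_rules
# Dry run:                   tun_firewall_rules "echo iptables"
```
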

IPsec and OpenVPN: a Short Comparison

IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in the kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement

Thanks to Mr. Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and OpenSolaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era

Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processingload carefully between the client PC and the (relatively)more powerful server machine Using stored programswas one way to reduce the load on the client whichmight otherwise be overloaded

Network bandwidth was often a serious constraint onclient-server applications execution of multiple server-side operations in a single stored program could reducenetwork traffic

Maintaining correct versions of client software in a client-server environment was often problematic Centralizing atleast some of the processing on the server allowed agreater measure of control over core logic

Stored programs offered clear security advantages becausein those days application users typically connected directlyto the database rather than through a middle tier As Idiscuss later in this article stored procedures allow you torestrict the database account only to well-defined procedurecalls rather than allowing the account to execute any andall SQL statements

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C# and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security
Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.
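As a concrete sketch of definer rights security, consider the following. The schema here (a mybank database with accounts and audit_log tables, and an app_user account) is invented for illustration; the application account receives only the EXECUTE privilege:

```sql
DELIMITER //
-- Runs with the definer's privileges (SQL SECURITY DEFINER is the
-- MySQL default); callers never touch the tables directly.
CREATE PROCEDURE mybank.log_deposit (IN p_account INT,
                                     IN p_amount  DECIMAL(10,2))
    SQL SECURITY DEFINER
BEGIN
    UPDATE accounts SET balance = balance + p_amount
     WHERE account_id = p_account;
    -- Every deposit is logged; there is no unlogged path to the table.
    INSERT INTO audit_log (account_id, amount, logged_at)
    VALUES (p_account, p_amount, NOW());
END//
DELIMITER ;

-- The application account may call the procedure but not run ad hoc SQL:
GRANT EXECUTE ON PROCEDURE mybank.log_deposit TO 'app_user'@'%';
```

With this grant in place, a compromised app_user session can reach the accounts table only through the logged procedure.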

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

SYSTEM ADMINISTRATION

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).
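A minimal sketch of the pattern, with an invented customers table: because the procedure's parameter is bound as data, input such as ' OR '1'='1 is compared literally rather than parsed as SQL.

```sql
DELIMITER //
-- Hypothetical lookup procedure; table and column names are assumed.
CREATE PROCEDURE get_customer (IN p_username VARCHAR(64))
BEGIN
    -- p_username is a bound value here, never spliced into SQL text.
    SELECT customer_id, full_name
      FROM customers
     WHERE username = p_username;
END//
DELIMITER ;

-- The application issues only: CALL get_customer('alice');
```

The caveat in the paragraph above still applies: a procedure that builds dynamic SQL with CONCAT and PREPARE from unvalidated input reintroduces the vulnerability.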

Data Abstraction
It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
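The single-call version might be sketched like this. The accounts schema and the OUT status flag are assumptions (MySQL 5.0 has no SIGNAL statement for raising errors, so a flag is returned instead):

```sql
DELIMITER //
-- One round trip replaces the half-dozen queries described above.
CREATE PROCEDURE transfer_funds (IN  p_from   INT,
                                 IN  p_to     INT,
                                 IN  p_amount DECIMAL(10,2),
                                 OUT p_status VARCHAR(20))
BEGIN
    DECLARE v_balance DECIMAL(10,2);
    START TRANSACTION;
    -- Lock the source row while we check the balance.
    SELECT balance INTO v_balance
      FROM accounts
     WHERE account_id = p_from
       FOR UPDATE;
    IF v_balance >= p_amount THEN
        UPDATE accounts SET balance = balance - p_amount
         WHERE account_id = p_from;
        UPDATE accounts SET balance = balance + p_amount
         WHERE account_id = p_to;
        COMMIT;
        SET p_status = 'OK';
    ELSE
        ROLLBACK;
        SET p_status = 'INSUFFICIENT';
    END IF;
END//
DELIMITER ;

-- The client issues a single call:
-- CALL transfer_funds(101, 102, 50.00, @status);
```

A production routine also would check withdrawal limits and handle deadlocks, as the article notes; this sketch shows only the round-trip reduction.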

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications
Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed
It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.
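A trial-division routine of the sort used in such benchmarks can be written directly in the stored program language. This sketch is illustrative only; the article's actual test code is not shown:

```sql
DELIMITER //
-- Count the primes up to p_max by trial division, using the
-- stored program language's DECLARE/WHILE/IF constructs.
CREATE PROCEDURE count_primes (IN p_max INT, OUT p_count INT)
BEGIN
    DECLARE n, d INT;
    DECLARE is_prime BOOLEAN;
    SET p_count = 0;
    SET n = 2;
    WHILE n <= p_max DO
        SET is_prime = TRUE;
        SET d = 2;
        WHILE d * d <= n AND is_prime DO
            IF MOD(n, d) = 0 THEN
                SET is_prime = FALSE;
            END IF;
            SET d = d + 1;
        END WHILE;
        IF is_prime THEN
            SET p_count = p_count + 1;
        END IF;
        SET n = n + 1;
    END WHILE;
END//
DELIMITER ;

-- CALL count_primes(10000, @n); SELECT @n;
```

Every iteration here is pure interpreted computation with no table access, which is exactly the case where the language fares worst.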

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well, even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation
Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping
It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?
Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements, such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer, while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes, and will take effect for all applications that access the database, including ad hoc SQL.
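A minimal sketch of such a trigger, using an invented orders/orders_log schema:

```sql
-- Fires for every INSERT on orders, regardless of which
-- application (or ad hoc session) issued the statement.
CREATE TRIGGER orders_audit
    AFTER INSERT ON orders
    FOR EACH ROW
        INSERT INTO orders_log (order_id, logged_at)
        VALUES (NEW.order_id, NOW());
```

Because the trigger lives in the server, no application needs to be changed or even to know the logging exists.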

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.
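A purge task of this kind might look like the following sketch; the sessions table and the 30-day retention period are invented, and the scheduler itself must be switched on:

```sql
-- Requires MySQL 5.1+ with: SET GLOBAL event_scheduler = ON;
CREATE EVENT purge_old_sessions
    ON SCHEDULE EVERY 1 DAY
    DO
        DELETE FROM sessions
         WHERE last_seen < NOW() - INTERVAL 30 DAY;
```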

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
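For example, a calculation wrapped as a stored function becomes usable anywhere an SQL expression is allowed; the table name and tax rate below are assumptions:

```sql
-- A deterministic helper callable from ordinary SQL.
CREATE FUNCTION order_total_with_tax (p_total DECIMAL(10,2))
    RETURNS DECIMAL(10,2) DETERMINISTIC
    RETURN p_total * 1.08;  -- assumed 8% tax rate

-- Used from standard SQL:
-- SELECT order_id, order_total_with_tax(total) FROM orders;
```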

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006; with Steven Feuerstein). Guy can be contacted at guyharrison@quest.com.


In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager tries to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.

The ha.cf File
The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive: the time between heartbeats, in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try increasing the keepalive.

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

Listing 1. The /etc/ha.d/authkeys File

auth 1
1 sha1 ThisIsAVeryLongAndBoringPassword

Sometimes there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and when the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary, standby state


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos heartbeat ip address is 192.168.1.11

# phobos heartbeat ip address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that, so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File
The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d/ or /etc/init.d/, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d/. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).


Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages.

Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d/, simply enter them by name. As long as they can be started with "start" and stopped with "stop", there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and it is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting
Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly, or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion
I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha


In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation

To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more space. I recommend at least a 1GHz processor with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892 because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.
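If you manage the firewall with iptables rules directly, the openings look something like the following fragment (a sketch assuming the custom ports 3891 and 3892 chosen for this article; use 389 and your actual Admin Console port if you kept the defaults):

```
# /etc/sysconfig/iptables (excerpt) -- allow LDAP and the Admin Console
-A INPUT -p tcp --dport 3891 -j ACCEPT
-A INPUT -p tcp --dport 3892 -j ACCEPT
```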

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser, under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode, 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

Fedora Directory Server: the Evolution of Linux Authentication. Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

SYSTEM ADMINISTRATION

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible, the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.
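One way to schedule that CLI backup is a root crontab entry calling the db2bak script that ships in each FDS instance directory (a sketch; the slapd-yourservername instance name is a placeholder for your own):

```
# root crontab fragment: nightly directory backup at 02:00
0 2 * * * /opt/fedora-ds/slapd-yourservername/db2bak
```

db2bak writes a timestamped backup under the instance's bak directory, so repeated runs do not overwrite each other.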

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two), configured the same way (Fedora + FDS), with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already to make life easier.

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab, and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Figure 5. User Screen

Figure 6. Setting Up Replica

Now, you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients

With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM, the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility, available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tab. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
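For reference, the kind of entries the utility writes for you look roughly like the following (a sketch; the server name and base DN are hypothetical placeholders for your own environment, and the port matches this article's custom 3891):

```
# /etc/ldap.conf (excerpt)
host fds.example.com
port 3891
base dc=example,dc=com

# /etc/nsswitch.conf (excerpt)
passwd: files ldap
shadow: files ldap
group:  files ldap
```

Knowing these locations helps when troubleshooting a client that the graphical utility has misconfigured.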


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pgina client has solved this problem. To connect 2000/XP clients, download the main pgina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some dll issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the Pgina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pgina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pgina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pgina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.


Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pgina Site: www.pgina.org

pgina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market. Dell and HP/Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects, such as the Etherboot Project and the Linux Terminal Server Project, have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys worldwide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?

A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot eXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP; stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4); for example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server, in return, sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be transmitted over the network again. In the case of wireless networks, where bandwidth is limited, fat-client networks are highly inefficient. On the other hand, with a thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.

Thin Clients Booting over a Wireless Bridge. How quickly can thin clients boot over a wireless bridge, and how far apart can they really be? RONAN SKEHILL, ALAN DUNNE AND JOHN NELSON

Figure 1. LTSP Traffic Profile and Boot Sequence

How to Set Up a Thin-Client Network with a Wireless Bridge

One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server, with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1, serving PXE clients.
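The relevant part of that file looks roughly like the fragment below (a sketch; the address range and bootloader path follow common LTSP defaults and are assumptions to adjust for your own subnet):

```
# /etc/ltsp/dhcpd.conf (illustrative fragment)
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.20 192.168.0.250;
    option routers 192.168.0.1;
    next-server 192.168.0.1;          # TFTP/boot server address
    filename "/ltsp/i386/pxelinux.0"; # PXE bootloader handed to clients
}
```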

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap: 192.168.0.0/24

rpc.mountd: 192.168.0.0/24

rpc.statd: 192.168.0.0/24

in.tftpd: 192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart

sudo invoke-rc.d nfs-common restart

sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide to getting the bridge running so that the clients can boot over the bridge.

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined for more than one machine. This detail was found only after much sleuthing, and it led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all daemon programs manually.

Evaluating the Performance of a Thin-Client Network

The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to the wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber; and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

Conclusion

As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.
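A back-of-envelope calculation makes the gap concrete. The payload size and efficiency factors below are illustrative assumptions, not measurements from our tests:

```shell
# Rough boot-payload transfer-time estimate (all figures are assumptions).
payload_mb=100   # hypothetical kernel + root-filesystem traffic per client
wifi_mbps=24     # roughly 45% of 802.11g's nominal 54Mbps
wired_mbps=90    # roughly 90% of 100Mbps Ethernet
wifi_secs=$(( payload_mb * 8 / wifi_mbps ))
wired_secs=$(( payload_mb * 8 / wired_mbps ))
echo "802.11g: ~${wifi_secs}s, wired: ~${wired_secs}s"
```

Even under these crude assumptions, the wired link moves the same payload roughly four times faster, consistent with the ordering seen in Figure 3.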

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our tests show, as illustrated in Figure 5, that heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.

Acknowledgement

This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03/IN3/1396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centreunder the supervision of John Nelson He graduated in 2007with a degree in Computer Engineering and now works withEricsson Ireland as a Network Integration Engineer

John Nelson is a senior lecturer in the Department of Electronicand Computer Engineering at the University of LimerickHis interests include mobile and wireless communicationssoftware engineering and ambient assisted living

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org

26 | www.linuxjournal.com

It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in remote power and remote terminals, or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup

You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP

Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), locate the subnet section (or each host section, if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;

filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus

Set up a PXE server and then add menus to boot kickstart images, rescue disks and diagnostic tools all from the network. KYLE RANKIN

SYSTEM ADMINISTRATION


Install TFTP

After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package, or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions, if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
#   the trivial file transfer protocol. The tftp
#   protocol is often used to boot diskless
#   workstations, download configuration files to
#   network-aware printers and to start the
#   installation process for some operating systems.
service tftp
{
    disable         = no
    socket_type     = dgram
    protocol        = udp
    wait            = yes
    user            = root
    server          = /usr/sbin/in.tftpd
    server_args     = -s /var/lib/tftpboot
    per_source      = 11
    cps             = 100 2
    flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.

Add Syslinux

Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.
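The whole step amounts to a couple of commands. Here it is as a reusable sketch; the /usr/lib/syslinux/pxelinux.0 source path shown in the usage comment is a common location but an assumption, and it varies by distribution:

```shell
#!/bin/sh
# Sketch: install the pxelinux loader into the TFTP root and
# create the directory that holds its configuration files.
install_pxelinux() {
    tftp_root="$1"      # e.g. /var/lib/tftpboot
    pxelinux_bin="$2"   # path to the supplied pxelinux.0
    mkdir -p "$tftp_root/pxelinux.cfg"
    cp "$pxelinux_bin" "$tftp_root/"
}

# Usage (as root), assuming a common syslinux install path:
#   install_pxelinux /var/lib/tftpboot /usr/lib/syslinux/pxelinux.0
```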

PXE Menu Magic

You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS, with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
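That search order is easy to reproduce. The helper below (an illustrative sketch, not part of syslinux itself) prints every configuration filename pxelinux will try for a given MAC address and dotted-quad IP, in order:

```shell
#!/bin/sh
# Print the pxelinux.cfg/ filenames pxelinux tries, in order,
# for a given MAC address and dotted-quad IP address.
pxelinux_search_order() {
    mac="$1"   # e.g. 88:99:AA:BB:CC:DD
    ip="$2"    # e.g. 10.0.0.200
    # 1. 01-<mac>, lowercase, with colons replaced by hyphens.
    echo "01-$(echo "$mac" | tr 'A-Z:' 'a-z-')"
    # 2. The IP in uppercase hex, dropping one digit per attempt.
    hexip=$(printf '%02X%02X%02X%02X' $(echo "$ip" | tr '.' ' '))
    while [ -n "$hexip" ]; do
        echo "$hexip"
        hexip="${hexip%?}"
    done
    # 3. Finally, the catch-all file.
    echo default
}
```

For 88:99:AA:BB:CC:DD and 10.0.0.200, it prints 01-88-99-aa-bb-cc-dd, then 0A0000C8, 0A0000C and so on down to a single digit, and finally default.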

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux

label linux
    kernel vmlinuz-centos-3.6
    append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?

The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then you need to create either a custom configuration file for that host's MAC, or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools, like Knoppix and memtest86+, available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus

Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more-specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

label 2
    kernel vmlinuz-centos-4.5-64
    append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option lists a message, if there are any to display, by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like this:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------
<F1> Main | <F2> Help
Default image will boot in 30 seconds...

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk

One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk; I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now edit your pxelinux.cfg/default file, and add lines like the one I used above in my example:

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also, notice that this example references the renamed vmlinuz-knx5.1.1 and miniroot-knx5.1.1.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove 2 from the append line.

The final step might be the largest for you, if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.
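The staging steps can be collected into one sketch. The paths and the 5.1.1 filenames follow the example in the text and are assumptions; mount your Knoppix CD or ISO first:

```shell
#!/bin/sh
# Sketch: stage a mounted Knoppix CD (or ISO) for PXE+NFS boot.
# Copies the kernel and initrd into the TFTP root under distinct
# names and mirrors the full CD contents into the NFS export.
stage_knoppix() {
    cd_root="$1"     # mounted CD, e.g. /media/cdrom
    tftp_root="$2"   # e.g. /var/lib/tftpboot
    nfs_dir="$3"     # e.g. /mnt/knoppix/5.1.1
    # miniroot.gz as named in the text; some releases call it minirt.gz.
    cp "$cd_root/boot/isolinux/vmlinuz" "$tftp_root/vmlinuz-knx5.1.1"
    cp "$cd_root/boot/isolinux/miniroot.gz" "$tftp_root/miniroot-knx5.1.1.gz"
    mkdir -p "$nfs_dir"
    cp -a "$cd_root/." "$nfs_dir/"
}

# Usage (as root):
#   stage_knoppix /media/cdrom /var/lib/tftpboot /mnt/knoppix/5.1.1
```

Remember to export the NFS directory itself (for example, via /etc/exports); the sketch only copies the files.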

Extra Features: Memtest86+

Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
    kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion

There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.


New LinuxJournal.com Mobile

We are all very excited to let you know that LinuxJournal.com is optimized for mobile viewing. You can enjoy all of our news, blogs and articles from anywhere you can find a data connection on your phone or mobile device. We know you find it difficult to be separated from your Linux Journal, so now you can take LinuxJournal.com everywhere. Need to read that latest shell script trick right now? You got it. Go to m.linuxjournal.com to enjoy this new experience, and be sure to let us know how it works for you. — KATHERINE DRUCKMAN

Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan

IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825–1829). Later, in 1998, these RFCs were deprecated by RFCs 2401–2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

Creating VPNs with IPsec and SSL/TLS

How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN

Figure 1. A Basic VPN Tunnel

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. S/WAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a setmark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.
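As a sketch of that marking approach (the mark value 1 and the ACCEPT policy are arbitrary examples, not Openswan defaults):

```shell
#!/bin/sh
# Sketch: mark incoming ESP traffic so that filter rules can match
# the packets after decryption (the mark survives on the sk_buff).
mark_ipsec_traffic() {
    ipt="$1"    # "iptables", or "echo iptables" to dry-run
    # Tag ESP packets early, in the mangle table...
    $ipt -t mangle -A PREROUTING -p esp -j MARK --set-mark 1
    # ...then match the decrypted packets by that mark.
    $ipt -A INPUT -m mark --mark 1 -j ACCEPT
}

# Usage (as root): mark_ipsec_traffic iptables
```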

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN

OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) to connect to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan

First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
        #
        # Simply use raw RSA keys.
        # After starting openswan, run:
        #     ipsec showhostkey --left (or --right)
        # and fill in the connection similarly
        # to the example below.
        left=192.168.0.89
        leftrsasigkey=0sAQPP
        # The remote user.
        #
        right=192.168.0.92
        rightrsasigkey=0sAQON
        type=tunnel
        auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsa.key

Then, copy and paste the contents of rsa.key into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of:

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. The output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                              [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                         [OK]
NETKEY detected, testing for disabled ICMP send_redirects    [OK]
NETKEY detected, testing for disabled ICMP accept_redirects  [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets)  [OK]
Checking that pluto is running                               [OK]
Checking for 'ip' command                                    [OK]
Checking for 'iptables' command                              [OK]
Opportunistic Encryption Support                             [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets in each direction; this is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.


Example: Setting Up a VPN Tunnel with OpenVPN

First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side over a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar 3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read from or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).
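To illustrate that point, here is a sketch that creates a TUN device with iproute2 and firewalls it just as you would an Ethernet device. The interface name tun1, the 10.9.0.1/24 address and the port-1194 rule are example choices, not values from the configurations above:

```shell
#!/bin/sh
# Sketch: create a TUN device with iproute2 and apply an iptables
# rule to it, exactly as you would for eth0.
setup_tun() {
    run="$1"    # "" to execute, "echo" to dry-run
    $run ip tuntap add dev tun1 mode tun
    $run ip addr add 10.9.0.1/24 dev tun1
    $run ip link set tun1 up
    # Allow OpenVPN traffic in on the virtual device:
    $run iptables -A INPUT -i tun1 -p udp --dport 1194 -j ACCEPT
}

# Usage (as root): setup_tun ""
```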

IPsec and OpenVPN: a Short Comparison

IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not currently supported by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in the kernel, OpenVPN runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement
Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of the Technion, the Israel Institute of Technology located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.

www.linuxjournal.com | 33

Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era
Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages because, in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security
Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.
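A sketch of this idea follows (all table, procedure and account names are invented for illustration): the application account is granted EXECUTE on a procedure but no privileges on the underlying table.

```sql
-- The procedure runs with the definer's privileges (SQL SECURITY DEFINER
-- is MySQL's default), so app_user can insert only through this code path.
CREATE PROCEDURE add_audit_entry(IN p_msg VARCHAR(255))
    SQL SECURITY DEFINER
BEGIN
  INSERT INTO audit_log (msg, created_at) VALUES (p_msg, NOW());
END;

-- app_user gets EXECUTE on the procedure, and nothing else:
GRANT EXECUTE ON PROCEDURE mydb.add_audit_entry TO 'app_user'@'localhost';
```

In the mysql client, you would wrap the CREATE PROCEDURE statement in DELIMITER directives so the semicolons inside the body are not interpreted as statement terminators.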

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

SYSTEM ADMINISTRATION

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).
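To illustrate why (with invented names): a procedure parameter is always bound as data, whereas a procedure that assembles dynamic SQL from its inputs reintroduces the risk.

```sql
-- Safe: p_name is bound as a value and can never alter the statement text.
CREATE PROCEDURE get_customer(IN p_name VARCHAR(100))
BEGIN
  SELECT customer_id, name FROM customer WHERE name = p_name;
END;

-- Unsafe by contrast: building dynamic SQL inside the procedure, e.g.
-- CONCAT('SELECT ... WHERE name = ''', p_name, ''''), is injectable
-- unless the input is fully validated first.
```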

Data Abstraction
It is generally good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
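A sketch of such a fund-transfer procedure might look like this (the schema, names and the OUT-parameter status convention are assumptions for illustration, not taken from the article):

```sql
CREATE PROCEDURE transfer_funds(
    IN  p_from   INT,
    IN  p_to     INT,
    IN  p_amount DECIMAL(10,2),
    OUT p_status VARCHAR(30))
BEGIN
  DECLARE v_balance DECIMAL(10,2);

  START TRANSACTION;
  -- Lock the source row while we check the balance.
  SELECT balance INTO v_balance
    FROM account WHERE account_id = p_from FOR UPDATE;

  IF v_balance >= p_amount THEN
    UPDATE account SET balance = balance - p_amount WHERE account_id = p_from;
    UPDATE account SET balance = balance + p_amount WHERE account_id = p_to;
    COMMIT;
    SET p_status = 'ok';
  ELSE
    ROLLBACK;
    SET p_status = 'insufficient funds';
  END IF;
END;
```

The application then issues a single CALL transfer_funds(...), one round trip in place of the half-dozen queries described above.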

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications
Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

For instance, in a banking application, transfer-of-funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed
It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice between implementing it in the stored program language or in another language such as PHP or Java, you may wisely decide against using the stored program solution.

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation
Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping
It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?
Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.
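For example (names invented for illustration), MySQL's non-ANSI REPLACE statement can be hidden behind a procedure; the application only ever issues the CALL, and the procedure body is what gets rewritten for another RDBMS:

```sql
-- MySQL version uses REPLACE; an Oracle or Postgres version of this same
-- procedure would implement the upsert with that server's native syntax,
-- while the application's CALL save_setting(...) stays unchanged.
CREATE PROCEDURE save_setting(IN p_name VARCHAR(64), IN p_value VARCHAR(255))
BEGIN
  REPLACE INTO setting (name, value) VALUES (p_name, p_value);
END;
```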

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes and will take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
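As a small invented example of the last point, a pricing rule captured once as a stored function and then used from ordinary SQL (the table and rule are assumptions, not from the article):

```sql
-- Encapsulate the calculation once...
CREATE FUNCTION discounted_price(p_price DECIMAL(10,2), p_pct TINYINT)
RETURNS DECIMAL(10,2) DETERMINISTIC
RETURN p_price * (100 - p_pct) / 100;

-- ...then any application, or ad hoc SQL, applies the same rule:
SELECT item_id, discounted_price(price, 15) AS sale_price FROM item;
```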

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006; with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.



In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.

The ha.cf File
The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

Listing 1. The /etc/ha.d/authkeys File

auth 1
1 sha1 ThisIsAVeryLongAndBoringPassword

The first thing you need to specify is the keepalive, the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2
deadtime 32
warntime 16
initdead 64
baud 19200
# On briggs, the serial device is /dev/ttyS1
# On stratton, the serial device is /dev/ttyS0
serial /dev/ttyS1
auto_failback on
node briggs
node stratton
use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1
deadtime 10
warntime 5
udpport 694
# deimos' heartbeat ip address is 192.168.1.11
# phobos' heartbeat ip address is 192.168.1.21
ucast eth1 192.168.1.11
auto_failback off
stonith_host deimos wti_nps ares.example.com erisIsTheKey
stonith_host phobos wti_nps ares.example.com erisIsTheKey
node deimos
node phobos
use_logd yes

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File
The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right, the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).


Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
IPaddr::192.168.12.1 \
Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
killnfsd \
nfs-common \
nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.
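A minimal sketch of such a script follows (the service name "mydaemon" and the echo-only actions are placeholders; a real script would launch and kill an actual daemon where the comments indicate):

```shell
#!/bin/sh
# Skeleton of an LSB-style init script; Heartbeat invokes resource
# scripts with "start" when taking control and "stop" when releasing it.
mydaemon_ctl() {
    case "$1" in
        start) echo "Starting mydaemon" ;;  # real script: launch the daemon here
        stop)  echo "Stopping mydaemon" ;;  # real script: stop the daemon here
        *)     echo "Usage: mydaemon {start|stop}"; return 1 ;;
    esac
}

mydaemon_ctl "${1:-start}"
```

Drop a script like this into /etc/init.d (or /etc/ha.d/resource.d), make it executable, and list it by name in haresources.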

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting
Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion
I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.

www.linuxjournal.com | 41

Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use, Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation
To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of disk space. I recommend at least a 1GHz processor or above with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory, and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

SYSTEM ADMINISTRATION

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (./startconsole -u admin http://fullyqualifieddomainname:port) using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script, or directly edit /etc/rc.d/rc.local, to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management
Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.
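Scheduling a backup from the CLI usually comes down to a cron entry that calls the instance's db2bak script; a sketch, assuming an instance directory of /opt/fedora-ds/slapd-yourservername (substitute your own instance name):

```
# Nightly directory backup at 2am, in root's crontab
0 2 * * * /opt/fedora-ds/slapd-yourservername/db2bak
```

By default, db2bak writes timestamped backup directories under the instance's bak directory, so successive runs do not overwrite each other.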

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication
Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora + FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts, or use DNS. I recommend having DNS and NTP in place already, to make life easier.
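If you opt for /etc/hosts rather than DNS, a single entry for the peer on each server is enough; the names and addresses below are placeholders for your own values:

```
# /etc/hosts on server one (mirror this with server one's entry on server two)
192.168.1.11   fds2.example.com   fds2
```

Keeping the FQDN first on the line matters, because FDS identifies replication partners by fully qualified name.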

The next step is creating a Replication Manager account to bind one server's directory to the other, and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Figure 5. User Screen

Figure 6. Setting Up Replica

Now, you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on the second server, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write Replicas/Masters.

Authenticating Desktop Clients
With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
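Under the hood, the utility writes settings along these lines; the server name, port and base DN in this sketch are placeholders for your own values:

```
# /etc/ldap.conf (fragment)
host fds1.example.com
port 3891
base dc=example,dc=com

# /etc/nsswitch.conf (fragment)
passwd: files ldap
shadow: files ldap
group:  files ldap
```

Listing files before ldap in nsswitch.conf means local accounts (including root) still resolve even when the directory server is unreachable.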


Figure 7. Enter a name and password.

Figure 8. Enter information for the RManager account.

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some .dll issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions
FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.




Figure 10. Plugin Tester Tool

Figure 11. Login Prompt

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


thin-client network. If the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.

How to Set Up a Thin-Client Network with a Wireless Bridge
One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server, with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server. In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1 serving PXE clients.
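For reference, the relevant stanza of that file usually looks something like the following sketch; the exact ranges and filename path shipped with your LTSP version may differ:

```
# /etc/ltsp/dhcpd.conf (sketch; adjust to your subnet)
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.20 192.168.0.250;
    option routers 192.168.0.1;
    next-server 192.168.0.1;             # the LTSP/TFTP server
    filename "/ltsp/i386/pxelinux.0";    # PXE bootstrap served over TFTP
}
```

Restart the DHCP server after any edit so the new range takes effect.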

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap: 192.168.0.0/24

rpc.mountd: 192.168.0.0/24

rpc.statd: 192.168.0.0/24

in.tftpd: 192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart

sudo invoke-rc.d nfs-common restart

sudo invoke-rc.d portmap restart

2. Build the client's runtime environment. While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.

3. Configure your SSH keys. To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server

sudo ltsp-update-sshkeys

4. Start DHCP. You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client. Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide to getting the bridge running so that the clients can boot over the bridge:

Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings, and disconnect the server from the access point, leaving it powered on.

Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all daemon programs manually.

Evaluating the Performance of a Thin-Client Network
The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to a wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber; and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

Conclusion
As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.
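To put rough numbers on that gap, here is a back-of-the-envelope estimate; the 20Mbps and 80Mbps effective-throughput figures and the 100MB image size are illustrative assumptions, not measurements from our tests:

```shell
# Estimate transfer time for a 100MB boot image over each link
mb=100          # image size in megabytes (hypothetical)
wifi_mbps=20    # assumed effective 802.11g throughput
eth_mbps=80     # assumed effective Fast Ethernet throughput
echo "802.11g:  $(( mb * 8 / wifi_mbps )) s"
echo "Ethernet: $(( mb * 8 / eth_mbps )) s"
```

Even under these generous assumptions, the wireless link takes several times longer per client, which is why staggering boots matters so much on 802.11g.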

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our test shows, as illustrated in Figure 5, that heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple of ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.

Acknowledgement
This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03/IN3/1396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now, you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in remote power and remote terminals, or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup

You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP

Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), locate the subnet section (or each host section, if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;
filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus

Set up a PXE server, and then add menus to boot kickstart images, rescue disks and diagnostic tools, all from the network. KYLE RANKIN

SYSTEM ADMINISTRATION


Install TFTP

After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page and check on its conventions if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
# the trivial file transfer protocol.
# The tftp protocol is often used to boot diskless
# workstations, download configuration files to network-aware
# printers, and to start the installation process for
# some operating systems.
service tftp
{
    disable         = no
    socket_type     = dgram
    protocol        = udp
    wait            = yes
    user            = root
    server          = /usr/sbin/in.tftpd
    server_args     = -s /var/lib/tftpboot
    per_source      = 11
    cps             = 100 2
    flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.

Add Syslinux

Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.

PXE Menu Magic

You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

- Files named 01-MACADDRESS with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

- Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

- Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
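This search order is easy to see in action with a short shell sketch. The function below is hypothetical (the name is my own, and the syslinux package also ships a gethostip utility that prints the hex form of an address); it simply prints, for a given MAC and IPv4 address, the sequence of filenames pxelinux will request from pxelinux.cfg:

```shell
#!/bin/sh
# Print the pxelinux.cfg filenames tried for a given MAC and IPv4 address:
# first 01-<lowercase-mac>, then the hex IP with one digit dropped per try,
# and finally "default". (pxe_search_list is a made-up helper name.)
pxe_search_list() {
    # MAC address: lowercased, colons replaced with hyphens, "01-" prefix.
    echo "01-$(echo "$1" | tr 'A-Z:' 'a-z-')"
    # IP address: four octets rendered as eight uppercase hex digits.
    hex=$(printf '%02X%02X%02X%02X' $(echo "$2" | tr '.' ' '))
    while [ -n "$hex" ]; do
        echo "$hex"
        hex=${hex%?}    # drop the trailing hex digit and try again
    done
    echo "default"
}

pxe_search_list 88:99:AA:BB:CC:DD 10.0.0.200
```

For 10.0.0.200, this prints 01-88-99-aa-bb-cc-dd, then 0A0000C8 down through single digits, then default, matching the lookup order described above.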

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux
label linux
    kernel vmlinuz-centos-3.6
    append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?

The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools, like Knoppix and memtest86+, available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus

Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more-specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

label 2
    kernel vmlinuz-centos-4.5-64
    append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option names a file whose contents are displayed by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----|  Boot Options  |-----
|                         |
|  1. Knoppix 5.1.1       |
|  2. CentOS 4.5 64 bit   |
|                         |
---------------------------

<F1> Main | <F2> Help
Default image will boot in 30 seconds...

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk

One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk, I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the /boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the ones I used above in my example:

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx5.1.1 and miniroot-knx5.1.1.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove 2 from the append line.

The final step might be the largest for you if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.
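If you are setting up the NFS share just for this, a minimal /etc/exports entry matching the paths in my example might look like the fragment below. The export options here are an assumption (any read-only export your PXE clients can reach will do); after editing the file, re-export with exportfs -ra:

```
# /etc/exports fragment: read-only export of the copied Knoppix CD
# contents to the 10.0.0.0/24 network used in these examples
/mnt/knoppix/5.1.1  10.0.0.0/255.255.255.0(ro,async,no_subtree_check)
```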

Extra Features: Memtest86+

Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
    kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion

There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.


New LinuxJournal.com Mobile

We are all very excited to let you know that LinuxJournal.com is optimized for mobile viewing. You can enjoy all of our news, blogs and articles from anywhere you can find a data connection on your phone or mobile device. We know you find it difficult to be separated from your Linux Journal, so now you can take LinuxJournal.com everywhere. Need to read that latest shell script trick right now? You got it. Go to m.linuxjournal.com to enjoy this new experience, and be sure to let us know how it works for you. -- KATHERINE DRUCKMAN

Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp
atftp: ftp.mamalinux.com/pub/atftp
Syslinux PXE Page: syslinux.zytor.com/pxe.php
Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html
Knoppix: www.knoppix.org
Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan

IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825-1829). Later, in 1998, these RFCs were deprecated by RFCs 2401-2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

Creating VPNs with IPsec and SSL/TLS

How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN

Figure 1. A Basic VPN Tunnel

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. S/WAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a setmark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN

OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) to connect to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan

First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
	#
	# Simply use raw RSA keys.
	# After starting openswan, run:
	#   ipsec showhostkey --left (or --right)
	# and fill in the connection similarly
	# to the example below.
	left=192.168.0.89
	leftrsasigkey=0sAQPP...
	# The remote user.
	#
	right=192.168.0.92
	rightrsasigkey=0sAQON...
	type=tunnel
	auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsa.key

Then, copy and paste the contents of rsa.key into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. The output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                              [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                         [OK]
NETKEY detected, testing for disabled ICMP send_redirects    [OK]
NETKEY detected, testing for disabled ICMP accept_redirects  [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets)  [OK]
Checking that pluto is running                               [OK]
Checking for 'ip' command                                    [OK]
Checking for 'iptables' command                              [OK]
Opportunistic Encryption Support                             [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets flowing in the same direction; this is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.



Example: Setting Up a VPN Tunnel with OpenVPN

First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side in a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest key; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the mtu of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on
Mar  3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official
port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as
the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).

IPsec and OpenVPN: a Short Comparison

IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement

Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net
OpenVPN 2.0 HOWTO: openvpn.net/howto.html
RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948
Openswan: www.openswan.org
The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era

Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

- Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

- Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic Centralizing atleast some of the processing on the server allowed agreater measure of control over core logic

Stored programs offered clear security advantages becausein those days application users typically connected directlyto the database rather than through a middle tier As Idiscuss later in this article stored procedures allow you torestrict the database account only to well-defined procedurecalls rather than allowing the account to execute any andall SQL statements

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security
Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.
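A sketch of this pattern follows; the table, procedure and account names are invented for illustration (MySQL procedures use definer-rights security by default, and SQL SECURITY DEFINER merely makes that explicit):

```sql
-- Hypothetical schema: the application account is granted EXECUTE only,
-- never direct INSERT or UPDATE on the underlying tables.
DELIMITER //
CREATE PROCEDURE add_entry(IN p_account INT, IN p_amount DECIMAL(10,2))
    SQL SECURITY DEFINER
BEGIN
    -- business rule: only positive amounts are accepted
    IF p_amount > 0 THEN
        INSERT INTO account_txn (account_id, amount, txn_time)
        VALUES (p_account, p_amount, NOW());
        -- every insert is logged as a side effect of using the procedure
        INSERT INTO audit_log (account_id, action, logged_at)
        VALUES (p_account, 'add_entry', NOW());
    END IF;
END //
DELIMITER ;

GRANT EXECUTE ON PROCEDURE mydb.add_entry TO 'app_user'@'%';
```

Because app_user holds only EXECUTE, any path to the table other than the procedure (and its logging) is simply unavailable.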

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.

SYSTEM ADMINISTRATION

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring their legacy advantages and challenges to MySQL. GUY HARRISON

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

Data Abstraction
It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
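A sketch of what that single call might look like (the accounts table and its columns are assumptions for illustration, not from the article):

```sql
DELIMITER //
CREATE PROCEDURE transfer_funds(
    IN p_from INT, IN p_to INT, IN p_amount DECIMAL(10,2))
BEGIN
    DECLARE v_balance DECIMAL(10,2);
    START TRANSACTION;
    -- one round trip from the client; all checks happen server-side
    SELECT balance INTO v_balance
      FROM accounts WHERE account_id = p_from FOR UPDATE;
    IF v_balance >= p_amount THEN
        UPDATE accounts SET balance = balance - p_amount
         WHERE account_id = p_from;
        UPDATE accounts SET balance = balance + p_amount
         WHERE account_id = p_to;
        COMMIT;
    ELSE
        ROLLBACK;
    END IF;
END //
DELIMITER ;
```

The application then issues a single CALL transfer_funds(...) instead of six or more separate statements, each with its own round trip.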

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

Creating Common Routines across Multiple Applications
Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed
It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).
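The article's actual benchmark code isn't reproduced here, but a trial-division prime counter in the MySQL stored program language would look roughly like this sketch:

```sql
DELIMITER //
CREATE PROCEDURE count_primes(IN p_max INT, OUT p_count INT)
BEGIN
    DECLARE n INT DEFAULT 2;
    DECLARE d INT;
    DECLARE is_prime INT;
    SET p_count = 0;
    WHILE n <= p_max DO
        SET d = 2;
        SET is_prime = 1;
        -- trial division up to sqrt(n)
        WHILE d * d <= n AND is_prime = 1 DO
            IF n MOD d = 0 THEN
                SET is_prime = 0;
            END IF;
            SET d = d + 1;
        END WHILE;
        SET p_count = p_count + is_prime;
        SET n = n + 1;
    END WHILE;
END //
DELIMITER ;
```

Interpreted loops like these are exactly where the stored program language pays its performance penalty relative to compiled languages.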

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation
Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping
It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?
Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements, such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes and will take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
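For example, a deterministic calculation can be wrapped in a stored function and then used like any built-in; the function and column names here are illustrative, not from the article:

```sql
DELIMITER //
CREATE FUNCTION pct_change(p_old DECIMAL(12,2), p_new DECIMAL(12,2))
    RETURNS DECIMAL(12,2)
    DETERMINISTIC
BEGIN
    -- percentage change between two values
    RETURN (p_new - p_old) / p_old * 100;
END //
DELIMITER ;

-- callable from ordinary SQL, e.g.:
-- SELECT pct_change(cost, price) FROM products;
```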

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006, with Steven Feuerstein). Guy can be contacted at guyharrison@quest.com.


In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.
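One convenient way to generate a suitably long key is to hash some random data. This sketch writes the file in the current directory; the real file is /etc/ha.d/authkeys and must be owned by root:

```shell
# Hash 512 random bytes to get a long, unguessable sha1 key.
KEY=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | awk '{print $1}')
# Write the two-line authkeys format described above.
printf 'auth 1\n1 sha1 %s\n' "$KEY" > authkeys
chmod 600 authkeys
cat authkeys
```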

The ha.cf File
The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

The first thing you need to specify is the keepalive, the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try increasing the keepalive.

Listing 1. The /etc/ha.d/authkeys File

auth 1

1 sha1 ThisIsAVeryLongAndBoringPassword

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the cluster declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy; simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (uni-cast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state when the primary node returns.


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos' heartbeat IP address is 192.168.1.11

# phobos' heartbeat IP address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.
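One sanity check worth scripting: the node names in ha.cf must match each host's uname -n. This sketch works against a sample fragment created in the current directory; the real file is /etc/ha.d/ha.cf:

```shell
# Create a sample ha.cf fragment to parse (stands in for /etc/ha.d/ha.cf).
printf 'node deimos\nnode phobos\n' > ha.cf.sample
# Extract the configured node names.
NODES=$(awk '/^node/ {print $2}' ha.cf.sample)
echo "configured nodes: $NODES"
echo "this host is: $(uname -n)"
```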

The haresources File
The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.
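The left-to-right start, right-to-left stop ordering can be illustrated with a small sketch (the resource names are placeholders):

```shell
# Resources start left to right on takeover...
RESOURCES="IPaddr apache2"
for r in $RESOURCES; do echo "start $r"; done
# ...and stop right to left when control moves away.
REVERSED=$(printf '%s\n' $RESOURCES | tac | tr '\n' ' ')
for r in $REVERSED; do echo "stop $r"; done
```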

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.
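The :: convention splits cleanly with shell parameter expansion; this sketch only illustrates how such an entry decomposes, it is not part of Heartbeat itself:

```shell
SPEC='IPaddr::192.168.1.9/24/eth0'
RESOURCE=${SPEC%%::*}   # script name looked up in /etc/ha.d/resource.d
ARGS=${SPEC#*::}        # arguments handed to the script
IP=${ARGS%%/*}          # first '/'-separated field: the address itself
echo "$RESOURCE gets: $ARGS (address $IP)"
```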

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right, the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).


Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \

    IPaddr::192.168.1.21 \

    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \

    killnfsd \

    nfs-common \

    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting
Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion
I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha


In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation

To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of space. I recommend at least a 1GHz processor with 512MB or more of memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892 because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication

Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

SYSTEM ADMINISTRATION

numbers (3891 and 3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.
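Scheduling that CLI backup can be as simple as a cron entry. The sketch below is an assumption on my part, not from the article: db2bak is the backup script shipped in each FDS instance directory, and the slapd-yourservername path is a placeholder for your own instance name.

```shell
# Hypothetical root crontab entry: run an online backup nightly at 2am
# using the db2bak script from the FDS instance directory.
0 2 * * * /opt/fedora-ds/slapd-yourservername/db2bak
```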

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for


Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora + FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already to make life easier.
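A couple of quick checks can confirm both conditions before you proceed. This is only a sketch; the hostname below is a placeholder for your own server two:

```shell
# Can this host resolve its replication peer?
getent hosts server-two.example.com

# How far apart are the clocks? -q queries the offset without setting the time.
ntpdate -q server-two.example.com
```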

The next step is creating a Replication Manager account to bind one server's directory to the other, and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then click the Save button at the bottom. On




Figure 5. User Screen

Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now, you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on the second server, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients

With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM, the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tab. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
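The same changes the graphical utility makes also can be scripted with Fedora's authconfig tool. The invocation below is my own sketch, not from the article; the server address, port and base DN are placeholders for the values from your own FDS install:

```shell
# Hypothetical non-interactive equivalent of the GUI Authentication utility;
# substitute your own FDS hostname, port and base DN before running.
authconfig --enableldap --enableldapauth \
           --ldapserver=ldap://fds.example.com:3891 \
           --ldapbasedn="dc=example,dc=com" \
           --update
```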


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some dll issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.




Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all daemon programs manually.

Evaluating the Performance of a Thin-Client Network

The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM, plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to a wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber;



Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Conclusion

As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our test shows, as illustrated in Figure 5, that heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could

soon cease to be an issue.

In the meantime, we can recommend a couple of ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.

Acknowledgement

This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03IN31396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now, you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in remote power and remote terminals, or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup

You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP

Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), locate the subnet section (or each host section, if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;

filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus

Set up a PXE server, and then add menus to boot kickstart images, rescue disks and diagnostic tools, all from the network. KYLE RANKIN



Install TFTP

After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package, or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf, if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
#       the trivial file transfer protocol.
#       The tftp protocol is often used to boot diskless
#       workstations, download configuration files to network-aware
#       printers, and to start the installation process for
#       some operating systems.
service tftp
{
        disable         = no
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /var/lib/tftpboot
        per_source      = 11
        cps             = 100 2
        flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.
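As a sketch, on a host with a plain iptables firewall (the use of the default INPUT chain here is an assumption; adapt the rule to your own firewall layout), opening the TFTP port looks like this:

```shell
# Accept incoming TFTP traffic on UDP port 69.
iptables -A INPUT -p udp --dport 69 -j ACCEPT
```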

Add Syslinux

Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.
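Put together, the copy and directory creation look roughly like this; note that the source path for pxelinux.0 varies by distribution, so /usr/lib/syslinux below is an assumption:

```shell
# Copy the PXE bootstrap into the TFTP root and create the config directory.
# /usr/lib/syslinux/pxelinux.0 is a common, but not universal, location.
cp /usr/lib/syslinux/pxelinux.0 /var/lib/tftpboot/
mkdir -p /var/lib/tftpboot/pxelinux.cfg
```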

PXE Menu Magic

You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).
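As a small illustrative sketch (the helper function name is mine, not part of pxelinux), deriving that filename from a colon-separated MAC address is a one-liner:

```shell
# Turn a MAC address as reported by ifconfig/ip into the lowercase,
# hyphen-separated config filename that pxelinux searches for.
mac_to_cfg() {
  printf '01-%s\n' "$1" | tr ':' '-' | tr 'A-Z' 'a-z'
}

mac_to_cfg 88:99:AA:BB:CC:DD   # prints 01-88-99-aa-bb-cc-dd
```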

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.
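To see that fallback sequence for a given address, here is a short sketch (the function name is my own) that prints the full eight-digit hex filename followed by each truncation pxelinux would try in turn:

```shell
# Print the hex config filenames pxelinux searches for a dotted-quad IP,
# from the full eight-digit name down to a single digit.
pxelinux_hex_names() {
  local hex
  hex=$(echo "$1" | awk -F. '{ printf "%02X%02X%02X%02X", $1, $2, $3, $4 }')
  while [ -n "$hex" ]; do
    echo "$hex"
    hex=${hex%?}   # drop the last hex digit and try again
  done
}

pxelinux_hex_names 10.0.0.200   # 0A0000C8, then 0A0000C, ... down to 0
```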

Finally, if no specific files can be found, pxelinux will look for a file named default and use it.

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux

label linux
    kernel vmlinuz-centos-36
    append text nofb load_ramdisk=1 initrd=initrd-centos-36.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?

The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what



happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools like Knoppix and memtest86+ available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus

Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more-specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
  kernel vmlinuz-knx5.1.1
  append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

label 2
  kernel vmlinuz-centos-4.5-64
  append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option lists a message, if there are any to display, by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------
<F1> Main | <F2> Help
Default image will boot in 30 seconds.

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk

One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk; I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the ones I used above in my example:

label 1
  kernel vmlinuz-knx5.1.1
  append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx5.1.1 and miniroot-knx5.1.1.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove 2 from the append line.

The final step might be the largest for you, if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.
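NFS configuration details vary by distribution, but as a rough sketch (the path matches the example above; the export options shown are one reasonable choice, not a requirement), the share might look like this in /etc/exports, activated with exportfs -ra:

```
# /etc/exports: share the copied Knoppix CD contents read-only
/mnt/knoppix/5.1.1   *(ro,async,no_root_squash)
```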

Extra Features: Memtest86+

Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3

kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion

There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.


New LinuxJournal.com Mobile

We are all very excited to let you know that LinuxJournal.com is optimized for mobile viewing. You can enjoy all of our news, blogs and articles from anywhere you can find a data connection on your phone or mobile device.

We know you find it difficult to be separated from your Linux Journal, so now you can take LinuxJournal.com everywhere. Need to read that latest shell script trick right now? You got it.

Go to m.linuxjournal.com to enjoy this new experience, and be sure to let us know how it works for you.

— KATHERINE DRUCKMAN

Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan

IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825–1829). Later, in 1998, these RFCs were deprecated by RFCs 2401–2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

Creating VPNs with IPsec and SSL/TLS
How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN

Figure 1. A Basic VPN Tunnel

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. S/WAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a setmark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.
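As a sketch of that approach (the mark value and chains here are arbitrary choices for illustration, not Openswan requirements), you might mark inbound ESP traffic in the mangle table and then match on the mark in later rules, since the mark survives decryption:

```
# Mark incoming ESP packets before decryption...
iptables -t mangle -A PREROUTING -p esp -j MARK --set-mark 1
# ...and match the decrypted traffic by mark in later rules.
iptables -A INPUT -m mark --mark 1 -j ACCEPT
```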

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN

OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices. TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) to connect to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan

First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
    # Simply use raw RSA keys.
    # After starting openswan, run:
    #   ipsec showhostkey --left (or --right)
    # and fill in the connection similarly
    # to the example below.
    left=192.168.0.89
    leftrsasigkey=0sAQPP
    # The remote user.
    right=192.168.0.92
    rightrsasigkey=0sAQON
    type=tunnel
    auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsa.key

Then, copy and paste the contents of rsa.key into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. And, output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                              [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                         [OK]
NETKEY detected, testing for disabled ICMP send_redirects    [OK]
NETKEY detected, testing for disabled ICMP accept_redirects  [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets)  [OK]
Checking that pluto is running                               [OK]
Checking for 'ip' command                                    [OK]
Checking for 'iptables' command                              [OK]
Opportunistic Encryption Support                             [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets flowing in the same direction; this is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.



Example: Setting Up a VPN Tunnel with OpenVPN

First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side in a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest key; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link

You can set the mtu of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.
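For example (a hedged sketch with iptables; interface and port match the defaults above, but adjust both to your own firewall setup), you might explicitly allow the OpenVPN port and the tunnel interface:

```
# Allow OpenVPN's UDP port and all traffic on the tunnel interface
iptables -A INPUT  -p udp --dport 1194 -j ACCEPT
iptables -A INPUT  -i tun0 -j ACCEPT
iptables -A OUTPUT -o tun0 -j ACCEPT
```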

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar  3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official
port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000
as the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).

IPsec and OpenVPN: a Short Comparison

IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement

Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era

Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

- Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

- Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

- Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

- Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security

Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.
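As an illustrative sketch of this pattern (the table, routine and account names here are hypothetical, not from the article): the procedure runs with its definer's rights, and the application account receives only EXECUTE, with no direct grants on the table.

```sql
-- Owner account creates the procedure; SQL SECURITY DEFINER is the
-- MySQL default, so the body runs with the owner's privileges.
CREATE PROCEDURE log_event(IN msg VARCHAR(255))
SQL SECURITY DEFINER
  INSERT INTO audit_log (created_at, message) VALUES (NOW(), msg);

-- The application account may call the procedure...
GRANT EXECUTE ON PROCEDURE mydb.log_event TO 'app'@'%';
-- ...but receives no INSERT/SELECT grants on mydb.audit_log itself.
```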

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Data Abstraction

It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic

Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
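A minimal sketch of such a stored program (the table and column names are hypothetical; real code would also check balances and withdrawal limits before committing, as described above):

```sql
DELIMITER //
CREATE PROCEDURE transfer_funds(
    IN from_acct INT, IN to_acct INT, IN amount DECIMAL(10,2))
BEGIN
  START TRANSACTION;
  UPDATE accounts SET balance = balance - amount WHERE acct_id = from_acct;
  UPDATE accounts SET balance = balance + amount WHERE acct_id = to_acct;
  COMMIT;
END //
DELIMITER ;

-- The application now needs only one round trip:
-- CALL transfer_funds(1001, 1002, 150.00);
```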

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications

Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

For instance, in a banking application, transfer-of-funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed

It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages (.NET and Java) has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.
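As a rough illustration of the kind of benchmark being described (this is a sketch, not the article's actual test code), a trial-division prime counter in the stored program language might look like this:

```sql
-- Hypothetical number-crunching benchmark: count primes below p_max.
DELIMITER //
CREATE PROCEDURE count_primes(IN p_max INT, OUT p_count INT)
BEGIN
    DECLARE n INT DEFAULT 2;
    DECLARE d INT;
    DECLARE is_prime INT;
    SET p_count = 0;
    WHILE n < p_max DO
        SET d = 2;
        SET is_prime = 1;
        -- Trial division up to sqrt(n).
        WHILE d * d <= n AND is_prime = 1 DO
            IF n % d = 0 THEN
                SET is_prime = 0;
            END IF;
            SET d = d + 1;
        END WHILE;
        SET p_count = p_count + is_prime;
        SET n = n + 1;
    END WHILE;
END//
DELIMITER ;
```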

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine and you have a choice between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation

Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping

It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?

Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations

MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when Data Manipulation Language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes, and they take effect for all applications that access the database, including ad hoc SQL.
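A minimal sketch of the idea (the table names are invented for illustration): a trigger that keeps a denormalized running total up to date on every insert:

```sql
-- Hypothetical trigger maintaining a denormalized total;
-- fires for every application that inserts into transactions.
CREATE TRIGGER maintain_account_totals
AFTER INSERT ON transactions
FOR EACH ROW
    UPDATE account_totals
       SET total = total + NEW.amount
     WHERE account_id = NEW.account_id;
```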

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.
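For example (an assumed maintenance task, not one from the article), a 5.1-style event that purges old rows once a day might look like this:

```sql
-- Hypothetical nightly purge using the MySQL 5.1 event scheduler.
CREATE EVENT purge_old_sessions
    ON SCHEDULE EVERY 1 DAY
    DO
        DELETE FROM sessions
         WHERE last_seen < NOW() - INTERVAL 30 DAY;
```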

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
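A small sketch of the pattern (the business rule here is invented for illustration):

```sql
-- Hypothetical stored function callable from ordinary SQL.
CREATE FUNCTION order_age_days(p_order_date DATE)
    RETURNS INT DETERMINISTIC
    RETURN DATEDIFF(CURDATE(), p_order_date);
```

Once defined, it can be used like any built-in function, for example: SELECT order_id, order_age_days(order_date) FROM orders.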

You Decide

The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006, with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.


In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat

Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager tries to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses, from looking at your environment, as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat

Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.

The ha.cf File

The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around! Two different example ha.cf files are shown in Listings 2 and 3.

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

Listing 1. The /etc/ha.d/authkeys File

auth 1
1 sha1 ThisIsAVeryLongAndBoringPassword

The first thing you need to specify is the keepalive: the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is

ucast <device> <peer-ip-address>

Here is an example

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.
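For reference, an mcast line takes a device, a multicast group, a UDP port, a TTL and a loop flag; the values below are illustrative assumptions, not a recommendation:

```
mcast eth0 225.0.0.1 694 1 0
```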

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2
deadtime 32
warntime 16
initdead 64
baud 19200
# On briggs, the serial device is /dev/ttyS1.
# On stratton, the serial device is /dev/ttyS0.
serial /dev/ttyS1
auto_failback on
node briggs
node stratton
use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1
deadtime 10
warntime 5
udpport 694
# deimos' heartbeat IP address is 192.168.1.11
# phobos' heartbeat IP address is 192.168.1.21
ucast eth1 192.168.1.11
auto_failback off
stonith_host deimos wti_nps ares.example.com erisIsTheKey
stonith_host phobos wti_nps ares.example.com erisIsTheKey
node deimos
node phobos
use_logd yes

It's a toss-up as to which one to use. My thinking is that, so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File

The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions the command is

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).


Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting

Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly, or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion

I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat, Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down, as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation

To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of space. I recommend at least a 1GHz processor with 512MB or more of memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory, and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then start the service by typing

service httpd start

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and, if you want to customize the ports as we have here, set those to the correct numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script, or directly edit /etc/rc.d/rc.local, to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information, and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two), configured the same way (Fedora + FDS), with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts, or use DNS. I recommend having DNS and NTP in place already to make life easier.

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add and then the Save button at the bottom.

44 | www.linuxjournal.com

SYSTEM ADMINISTRATION

In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others.

Figure 5. User Screen

Figure 6. Setting Up Replica

On server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas rather than the entire directory database when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click Next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients
With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM, the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
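On headless Fedora systems, the same changes can be scripted with the authconfig tool instead of the GUI; a sketch, where the server URI and base DN are placeholders for your own FDS suffix:

```shell
# Enable LDAP lookups and LDAP authentication from the command line
# (equivalent to the GNOME/KDE Authentication utility). The hostname
# and base DN are placeholders for your FDS server and suffix.
authconfig --enableldap \
           --enableldapauth \
           --ldapserver=ldap://fds.example.com:389 \
           --ldapbasedn="dc=example,dc=com" \
           --update
```

The --update flag rewrites nsswitch.conf and the PAM configuration in one step, just as clicking OK in the utility does.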


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pgina client has solved this problem. To connect 2000/XP clients, download the main pgina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some .dll issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the Pgina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pgina client is through Safe Mode. Now click on the Configure button, and enter the LDAP server name, port and context you want pgina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions
FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and makes it easier, while extending normally commercial features, such as MMR, to open source. By adding pgina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.




Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pgina Site: www.pgina.org

pgina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Conclusion
As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our test shows, as illustrated in Figure 5, that heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.
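On a SysV-init thin-client image, those startup tweaks might be sketched like this. The S20ldm symlink name is an assumption; list your own rc2.d to find the actual name:

```shell
# Start LDM as the first service in runlevel 2.
# S20ldm is a hypothetical current name; check /etc/rc2.d for yours.
mv /etc/rc2.d/S20ldm /etc/rc2.d/S01ldm

# Drop syslogd from startup if system logs are not needed on the client.
update-rc.d -f sysklogd remove
```

Both changes are made on the client's root image on the LTSP server, not on the server's own init scripts.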

Acknowledgement
This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03/IN3/1396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.

Resources

Linux Terminal Server Project (LTSP): ltsp.sourceforge.net

Ubuntu ThinClient How-To: https://help.ubuntu.com/community/ThinClientHowto

Azimuth WLAN Chamber: www.azimuth.net

ROM-o-matic: rom-o-matic.net

Etherboot: www.etherboot.org

Wireshark: www.wireshark.org

Webmin: www.webmin.com

Edubuntu: www.edubuntu.org


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in remote power and remote terminals, or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup
You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP
Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), and locate the subnet section (or each host section if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;
filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus
Set up a PXE server and then add menus to boot kickstart images, rescue disks and diagnostic tools, all from the network. KYLE RANKIN



Install TFTP
After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package, or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
# the trivial file transfer protocol.
# The tftp protocol is often used to boot diskless
# workstations, download configuration files to network-aware
# printers, and to start the installation process for
# some operating systems.
service tftp
{
        disable         = no
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /var/lib/tftpboot
        per_source      = 11
        cps             = 100 2
        flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.
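As a sketch of those two steps (iptables chain layout and init-script paths vary by distribution):

```shell
# Allow incoming TFTP (UDP port 69) through a typical iptables firewall.
iptables -I INPUT -p udp --dport 69 -j ACCEPT

# Pick up the new /etc/xinetd.d/tftp file (path assumes SysV init scripts).
/etc/init.d/xinetd reload
```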

Add Syslinux
Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.
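Those copy steps might look like the following; the source path /usr/lib/syslinux/pxelinux.0 is a common but not universal package location, so locate yours with your package manager if it differs:

```shell
# Copy the PXE bootstrap into the TFTP root and create the config directory.
# /usr/lib/syslinux/pxelinux.0 is an assumed package path.
cp /usr/lib/syslinux/pxelinux.0 /var/lib/tftpboot/
mkdir -p /var/lib/tftpboot/pxelinux.cfg
```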

PXE Menu Magic
You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
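To illustrate the hex-IP naming, here is a small helper of my own (not a pxelinux tool): an address like 10.0.0.200 becomes 0A0000C8, and pxelinux would then try 0A0000C, 0A0000 and so on, down to default:

```shell
# Convert a dotted-quad IP to the uppercase hex filename pxelinux
# looks for first in pxelinux.cfg/ (illustrative helper only).
ip_to_hex() {
    # Split the address on dots and print each octet as two hex digits.
    printf '%02X%02X%02X%02X\n' $(echo "$1" | tr '.' ' ')
}

ip_to_hex 10.0.0.200    # prints 0A0000C8
```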

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux
label linux
    kernel vmlinuz-centos-3.6
    append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?
The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what



happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools like Knoppix and memtest86+ available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus
Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more-specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix
label 2
    kernel vmlinuz-centos-4.5-64
    append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option lists a message, if there are any to display, by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------
<F1> Main | <F2> Help
Default image will boot in 30 seconds

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk
One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk, I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now edit your pxelinux.cfg/default file, and add lines like the one I used above in my example:

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx5.1.1 and miniroot-knx5.1.1.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove 2




from the append line.

The final step might be the largest for you if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.
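A sketch of that NFS piece follows; the export options, CD device and mount point are assumptions, and NFS details vary by distribution:

```shell
# Export the Knoppix directory read-only to the PXE subnet
# (options and subnet are illustrative; adjust for your network).
echo '/mnt/knoppix/5.1.1 10.0.0.0/24(ro,no_root_squash,async)' >> /etc/exports

# Copy the CD contents into the exported directory and activate the export.
mount /dev/cdrom /mnt/cdrom
cp -a /mnt/cdrom/. /mnt/knoppix/5.1.1/
exportfs -ra
```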

Extra Features: Memtest86+
Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available) or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
    kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion
There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.



Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan
IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825–1829). Later, in 1998, these RFCs were deprecated by RFCs 2401–2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security

Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.
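On a 2.6 kernel, you can inspect these values on an established connection with the standard iproute2 tools (the exact output fields vary by kernel version):

```shell
# List the kernel's IPsec Security Associations; each entry shows
# the peer addresses, the SPI and the configured ESP algorithms.
ip xfrm state

# List the security policies that decide which traffic enters the tunnel.
ip xfrm policy
```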

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network

Creating VPNs with IPsec and SSL/TLS
How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN


Figure 1. A Basic VPN Tunnel

Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, "UDP Encapsulation of IPsec ESP Packets", deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. SWAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces. One solution can be marking the packets in the Linux kernel with iptables (with a setmark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.
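As a hedged illustration of this marking approach (the interface name and mark value are assumptions for the sketch, not taken from the article), the rules might look like this:

```shell
# Mark ESP traffic arriving on eth0; the mark lives in struct sk_buff
# and survives kernel decryption, so later rules can still match it.
iptables -t mangle -A PREROUTING -i eth0 -p esp -j MARK --set-mark 1

# Accept packets carrying the mark (traffic that arrived encrypted
# via IPsec), even though it never crosses an ipsecX interface.
iptables -A INPUT -m mark --mark 1 -j ACCEPT
```

These rules require root privileges and are meant as a configuration sketch, not a complete firewall policy.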

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN
OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan
First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
	# Simply use raw RSA keys.
	# After starting Openswan, run:
	#     ipsec showhostkey --left (or --right)
	# and fill in the connection similarly
	# to the example below.
	left=192.168.0.89
	leftrsasigkey=0sAQPP...
	# The remote user.
	right=192.168.0.92
	rightrsasigkey=0sAQON...
	type=tunnel
	auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsa.key

Then, copy and paste the contents of rsa.key into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. Output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly

Version check and ipsec on-path [OK]

Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)

Checking for IPsec support in kernel [OK]

NETKEY detected testing for disabled ICMP send_redirects [OK]

NETKEY detected testing for disabled ICMP accept_redirects [OK]

Checking for RSA private key (/etc/ipsec.d/hostkey.secrets) [OK]

Checking that pluto is running [OK]

Checking for ip command [OK]

Checking for iptables command [OK]

Opportunistic Encryption Support [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)

IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)

IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets; this is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf (nat_traversal=no is the default).

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.



Example: Setting Up a VPN Tunnel with OpenVPN
First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side in a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest key; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar 3 2007

IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.

LZO compression initialized

TUN/TAP device tun0 opened

/sbin/ip link set dev tun0 up mtu 1500

/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1

UDPv4 link local (bound): [undef]:1194

UDPv4 link remote: 192.168.0.89:1194

Peer Connection Initiated with 192.168.0.89:1194

Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).
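For example (a sketch; the device name tun0 and the permissive policy are assumptions for illustration), unfirewalling the tunnel endpoint could be as simple as:

```shell
# Allow traffic arriving over the VPN tunnel device...
iptables -A INPUT -i tun0 -j ACCEPT
# ...and allow the OpenVPN channel itself (UDP 1194, the default port).
iptables -A INPUT -p udp --dport 1194 -j ACCEPT
```

As with any firewall configuration, these commands need root privileges and should be adapted to your existing rule set.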

IPsec and OpenVPN: a Short Comparison
IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement
Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era
Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security
Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.
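As a hedged sketch of this pattern (the table, procedure and account names are invented for illustration), definer-rights access in MySQL might look like this:

```sql
-- Hypothetical schema: all writes to accounts go through the
-- procedure, which logs each change; the application account gets
-- EXECUTE rights only, never direct table privileges.
CREATE PROCEDURE update_balance(IN p_account INT, IN p_delta DECIMAL(10,2))
    SQL SECURITY DEFINER  -- run with the creator's privileges (the default)
BEGIN
    UPDATE accounts SET balance = balance + p_delta
        WHERE account_id = p_account;
    INSERT INTO audit_log (account_id, delta, changed_at)
        VALUES (p_account, p_delta, NOW());
END;

-- The application account can call the procedure but cannot touch
-- the underlying tables directly:
GRANT EXECUTE ON PROCEDURE mydb.update_balance TO 'app_user'@'%';
```

(In the mysql command-line client, you would wrap the procedure body with a DELIMITER change so the semicolons inside BEGIN...END are not interpreted early.)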

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring their legacy advantages and challenges to MySQL. GUY HARRISON

Data Abstraction
It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequence of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
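A fund-transfer procedure along these lines might be sketched as follows (a hedged illustration only: the table and column names are invented, and real transfer logic would also check withdrawal limits and log the operation):

```sql
-- One CALL from the client replaces several network round trips:
-- the balance check, both updates and the commit all happen
-- server-side inside the procedure.
CREATE PROCEDURE transfer_funds(
    IN p_from INT, IN p_to INT, IN p_amount DECIMAL(10,2))
BEGIN
    DECLARE v_balance DECIMAL(10,2);

    START TRANSACTION;
    SELECT balance INTO v_balance
        FROM accounts WHERE account_id = p_from FOR UPDATE;

    IF v_balance >= p_amount THEN
        UPDATE accounts SET balance = balance - p_amount
            WHERE account_id = p_from;
        UPDATE accounts SET balance = balance + p_amount
            WHERE account_id = p_to;
        COMMIT;
    ELSE
        ROLLBACK;
    END IF;
END;
```

The application then issues a single statement, such as CALL transfer_funds(1001, 1002, 50.00), instead of a sequence of SELECTs and UPDATEs.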

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications
Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

For instance, in a banking application, transfer-of-funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed
It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation
Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping
It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?
Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements, such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer, while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes, and they take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
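The last point can be sketched as follows (a hedged example: the table and column names are invented for illustration):

```sql
-- Hypothetical stored function encapsulating a calculation so it
-- can be reused from ordinary SQL statements.
CREATE FUNCTION order_total(p_order INT) RETURNS DECIMAL(10,2)
    READS SQL DATA
BEGIN
    DECLARE v_total DECIMAL(10,2);
    SELECT SUM(quantity * unit_price) INTO v_total
        FROM order_lines WHERE order_id = p_order;
    RETURN IFNULL(v_total, 0);
END;

-- Usable directly inside SQL:
--   SELECT order_id, order_total(order_id) FROM orders;
```

Because the function runs inside the server, any SQL statement, report or ad hoc query gets the same calculation without duplicating the logic in each client.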

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006; with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.



In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.

The ha.cf File

The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive: the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try

Getting Started with Heartbeat

Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

SYSTEM ADMINISTRATION

Listing 1. The /etc/ha.d/authkeys File

auth 1

1 sha1 ThisIsAVeryLongAndBoringPassword


increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.
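For reference, an mcast line takes more fields than ucast or bcast: the device, a multicast group address, a UDP port, a TTL and a loop flag. The values below are illustrative, not from the article; check the documentation before using them:

```
mcast eth0 225.0.0.1 694 1 0
```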

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state

www.linuxjournal.com | 39

Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos' heartbeat IP address is 192.168.1.11

# phobos' heartbeat IP address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File

The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located in either /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).




Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
        IPaddr::192.168.12.1 \
        Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
        killnfsd \
        nfs-common \
        nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.
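A resource script only needs to respond sensibly to start and stop. The sketch below (the script name and messages are placeholders, not an actual Heartbeat resource) shows the minimal LSB-style skeleton such a script follows, then exercises both actions:

```shell
# Write a minimal init-style script that accepts start/stop,
# then exercise both actions.
cat > ./myservice <<'EOF'
#!/bin/sh
case "$1" in
    start) echo "starting myservice" ;;
    stop)  echo "stopping myservice" ;;
    *)     echo "usage: $0 {start|stop}" >&2; exit 1 ;;
esac
EOF
chmod +x ./myservice
./myservice start
./myservice stop
```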

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting

Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly, or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion

I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include

active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat, Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation

To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of disk space. I recommend at least a 1GHz processor with 512MB or more of memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two

ports to allow: LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication

Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING


numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script, or directly edit /etc/rc.d/rc.local, to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.
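Scheduling such a backup from the CLI might look like the cron entry below, using the db2bak script that ships in each FDS instance directory (the instance name and time here are placeholders, not from the article):

```
# /etc/crontab - run a directory backup every night at 2:30am
30 2 * * * root /opt/fedora-ds/slapd-yourservername/db2bak
```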

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for


Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying

permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.
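As a sketch of what such a layout looks like in LDIF form (the domain, OU and user names below are illustrative, not from the article; the posixAccount attributes are the kind of Posix information clients need later for login):

```
dn: ou=Accounting,dc=example,dc=com
objectClass: top
objectClass: organizationalUnit
ou: Accounting

dn: uid=jsmith,ou=Accounting,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
cn: John Smith
sn: Smith
uid: jsmith
uidNumber: 1001
gidNumber: 1001
homeDirectory: /home/jsmith
```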

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate

the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora + FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts, or use DNS. I recommend having DNS and NTP in place already to make life easier.
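A quick sanity check of those two prerequisites, run on each server, might look like the sketch below ("localhost" is only a placeholder so the commands run anywhere; substitute the other server's FQDN):

```shell
# Verify the peer's name resolves, and note the UTC clock for
# comparison against the other node's output.
PEER=localhost   # placeholder: use the other server's FQDN
getent hosts "$PEER"
date -u
```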

The next step is creating a Replication Manager account to bind one server's directory to the other, and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab, and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On




Figure 5. User Screen

Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication, which sends only deltas, rather than the entire directory database, when changes occur; this is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients

With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility, available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tab. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.


Figure 7. Enter a name and password.

Figure 8. Enter information for the RManager account.

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in msgina.dll

to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some dll issues with

version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging on locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.




Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525

Page 26: Linux Journal | System Administration Special Issue | 2009 · 2012-05-10 · 5 SPECIAL_ISSUE.TAR.GZ System Administration: Instant Gratification Edition Shawn Powers 6 CFENGINE FOR


It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now, you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network, instead of from local media such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS; in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in

remote power and remote terminals, or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup

You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP

Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), locate the subnet section (or each host section if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;
filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.
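If you PXE boot specific hosts via static assignments, the same two directives go inside each host block. Here is a hypothetical example (the hostname, MAC and addresses are made up for illustration):

```
host web01 {
    hardware ethernet 88:99:aa:bb:cc:dd;   # hypothetical MAC
    fixed-address 10.0.0.50;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}
```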

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 10.0.0.200 10.0.0.220;
    next-server 10.0.0.1;
    filename "pxelinux.0";
}

PXE Magic: Flexible Network Booting with Menus

Set up a PXE server and then add menus to boot kickstart images, rescue disks and diagnostic tools, all from the network. KYLE RANKIN

SYSTEM ADMINISTRATION


Install TFTP

After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package, or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page and check on its conventions if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
#       the trivial file transfer protocol.
#       The tftp protocol is often used to boot diskless
#       workstations, download configuration files to network-aware
#       printers, and to start the installation process for
#       some operating systems.
service tftp
{
        disable         = no
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /var/lib/tftpboot
        per_source      = 11
        cps             = 100 2
        flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.

Add Syslinux

Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.
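These steps can be sketched as a small shell function. The pxelinux.0 source path is an assumption; depending on packaging, distributions ship it under /usr/lib/syslinux or /usr/share/syslinux:

```shell
# Sketch: copy pxelinux.0 into the TFTP root and create the config directory.
install_pxelinux() {
  src=$1    # pxelinux.0 as shipped by the syslinux package (path is an assumption)
  root=$2   # your TFTP root, e.g. /var/lib/tftpboot
  mkdir -p "$root/pxelinux.cfg"
  cp "$src" "$root/pxelinux.0"
}

# e.g.: install_pxelinux /usr/lib/syslinux/pxelinux.0 /var/lib/tftpboot
```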

PXE Menu Magic

You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

Files named 01-MACADDRESS with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
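The lookup order above can be reproduced in shell, which is handy for deciding which filenames to create under pxelinux.cfg. This is only a sketch of the documented behavior; the uppercase-hex IP form follows syslinux's gethostip convention:

```shell
# 88:99:AA:BB:CC:DD -> 01-88-99-aa-bb-cc-dd (lowercase matters)
mac_config_name() {
  echo "01-$(echo "$1" | tr ':' '-' | tr 'A-Z' 'a-z')"
}

# 10.0.0.200 -> 0A0000C8 (each octet as two uppercase hex digits)
ip_to_hex() {
  printf '%02X%02X%02X%02X' $(echo "$1" | tr '.' ' ')
}

# Print the filenames pxelinux would try, most specific first.
lookup_order() {
  mac_config_name "$1"
  hex=$(ip_to_hex "$2")
  while [ -n "$hex" ]; do
    echo "$hex"
    hex=${hex%?}   # drop one trailing hex digit per retry
  done
  echo "default"
}

lookup_order 88:99:AA:BB:CC:DD 10.0.0.200
```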

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux
label linux
    kernel vmlinuz-centos-3.6
    append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?

The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools like Knoppix and memtest86+ available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus

Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more-specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

label 2
    kernel vmlinuz-centos-4.5-64
    append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-4.5-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option lists a message, if there are any to display, by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------

<F1> Main | <F2> Help
Default image will boot in 30 seconds...

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.
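As an illustration only (the wording here is hypothetical), a short f2.msg might contain:

```
----| Help |---------------------------------
 1: Knoppix 5.1.1 rescue disk (boots via NFS)
 2: CentOS 4.5 64-bit kickstart install
---------------------------------------------
<F1> Main menu
```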

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.
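Because the timeout value is in tenths of a second, a tiny helper (hypothetical, just for sanity-checking menu edits) converts seconds to the value the config expects:

```shell
# Convert seconds to syslinux/pxelinux timeout units (tenths of a second).
seconds_to_timeout() {
  echo $(( $1 * 10 ))
}

seconds_to_timeout 30   # prints 300, the value used in the menu above
```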

Extra Features: PXE Rescue Disk

One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk; I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the one I used above in my example:
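The copy-and-rename step might look like this as a shell sketch (the mount point is an assumption; adjust to wherever you mount the CD):

```shell
# Sketch: copy the Knoppix kernel and initial ramdisk into the TFTP root
# under distinct names, so they don't collide with other boot images.
copy_knoppix_files() {
  cd_boot=$1     # boot/isolinux directory of the mounted CD
  tftp_root=$2   # your TFTP root, e.g. /var/lib/tftpboot
  cp "$cd_boot/miniroot.gz" "$tftp_root/miniroot-knx5.1.1.gz"
  cp "$cd_boot/vmlinuz"     "$tftp_root/vmlinuz-knx5.1.1"
}

# e.g.: mount /dev/cdrom /mnt/cdrom &&
#       copy_knoppix_files /mnt/cdrom/boot/isolinux /var/lib/tftpboot
```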

label 1
    kernel vmlinuz-knx5.1.1
    append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx5.1.1 and miniroot-knx5.1.1.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove 2 from the append line.

The final step might be the largest for you if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.

Extra Features: Memtest86+

Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
    kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion

There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.



Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan

IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825–1829). Later, in 1998, these RFCs were deprecated by RFCs 2401–2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

Creating VPNs with IPsec and SSL/TLS

How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN

Figure 1. A Basic VPN Tunnel

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. SWAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a setmark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN

OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) to connect to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan

First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
	#
	# Simply use raw RSA keys.
	# After starting openswan, run:
	#	ipsec showhostkey --left (or --right)
	# and fill in the connection similarly
	# to the example below.
	left=192.168.0.89
	leftrsasigkey=0sAQPP...
	# The remote user.
	right=192.168.0.92
	rightrsasigkey=0sAQON...
	type=tunnel
	auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsa.key

Then, copy and paste the contents of rsa.key into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of:

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. Output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                              [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                         [OK]
NETKEY detected, testing for disabled ICMP send_redirects    [OK]
NETKEY detected, testing for disabled ICMP accept_redirects  [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets)  [OK]
Checking that pluto is running                               [OK]
Checking for 'ip' command                                    [OK]
Checking for 'iptables' command                              [OK]
Opportunistic Encryption Support                             [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed esp frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets; this is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.



Example: Setting Up a VPN Tunnel with OpenVPN

First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side in a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest key; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the mtu of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.
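For example, a bridged setup would swap the device type, and the MTU can be pinned explicitly; the value here is arbitrary, shown only as illustration:

```
dev tap        # Ethernet bridging instead of a routed tun device
tun-mtu 1400   # optional: set the tunnel MTU explicitly
```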

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar  3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).

IPsec and OpenVPN: a Short Comparison

IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement

Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era

Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security
Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.
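As a sketch of this pattern (all table, procedure and account names here are hypothetical, not from the article), a definer-rights procedure can be the only path into an audited table, with the application account granted EXECUTE but no direct table privileges:

```sql
-- Illustrative schema: names are assumptions for this sketch.
CREATE TABLE account_txn (
    txn_id     INT AUTO_INCREMENT PRIMARY KEY,
    account_id INT NOT NULL,
    amount     DECIMAL(10,2) NOT NULL,
    logged_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

DELIMITER //
-- SQL SECURITY DEFINER (the default) runs with the creator's privileges,
-- so the caller needs no rights on account_txn itself.
CREATE PROCEDURE record_txn(IN p_account INT, IN p_amount DECIMAL(10,2))
SQL SECURITY DEFINER
BEGIN
    -- Business rule enforced centrally: only positive amounts are inserted.
    -- (MySQL 5.0 has no SIGNAL; this sketch silently skips invalid input.)
    IF p_amount > 0 THEN
        INSERT INTO account_txn (account_id, amount)
        VALUES (p_account, p_amount);
    END IF;
END //
DELIMITER ;

-- The application account may call the procedure but cannot touch the table.
GRANT EXECUTE ON PROCEDURE record_txn TO 'appuser'@'%';
```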

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

SYSTEM ADMINISTRATION

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).
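To illustrate the contrast (a hypothetical sketch, not from the article): an application that splices user input into statement text is injectable, while a stored procedure call binds the input strictly as a value:

```sql
-- Vulnerable pattern: user input concatenated into the statement text.
-- If the name arrives as:  x' OR '1'='1
-- the application ends up sending:
--   SELECT * FROM users WHERE name = 'x' OR '1'='1'
-- which returns every row.

-- With a stored procedure, the input is a parameter, never parsed as SQL:
DELIMITER //
CREATE PROCEDURE user_lookup(IN p_name VARCHAR(50))
BEGIN
    -- p_name is compared as one value; nothing in it is executed.
    SELECT user_id, name FROM users WHERE name = p_name;
END //
DELIMITER ;

-- CALL user_lookup("x' OR '1'='1");  -- the whole string is just a name
```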

Data Abstraction
It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.
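Such an API might look like the following minimal sketch (the customers table and procedure names are assumptions for illustration); the application calls the procedures and never references the table directly:

```sql
DELIMITER //
-- Read side of the data access layer.
CREATE PROCEDURE customer_get(IN p_id INT)
BEGIN
    SELECT customer_id, name, email
      FROM customers
     WHERE customer_id = p_id;
END //

-- Write side: if the customers table is later restructured, only this
-- procedure changes, not the application code.
CREATE PROCEDURE customer_set_email(IN p_id INT, IN p_email VARCHAR(100))
BEGIN
    UPDATE customers SET email = p_email WHERE customer_id = p_id;
END //
DELIMITER ;

-- Application usage:  CALL customer_get(42);
```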

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
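A simplified sketch of such a transfer procedure follows (the accounts table, column names and the omission of withdrawal-limit checks are assumptions for brevity); the point is that one CALL replaces several round trips:

```sql
DELIMITER //
CREATE PROCEDURE transfer_funds(IN p_from INT, IN p_to INT,
                                IN p_amount DECIMAL(10,2),
                                OUT p_status VARCHAR(20))
BEGIN
    DECLARE v_balance DECIMAL(10,2);

    START TRANSACTION;
    -- Lock the source row and read its balance in one server-side step.
    SELECT balance INTO v_balance
      FROM accounts
     WHERE account_id = p_from
       FOR UPDATE;

    IF v_balance >= p_amount THEN
        UPDATE accounts SET balance = balance - p_amount
         WHERE account_id = p_from;
        UPDATE accounts SET balance = balance + p_amount
         WHERE account_id = p_to;
        COMMIT;
        SET p_status = 'OK';
    ELSE
        ROLLBACK;
        SET p_status = 'INSUFFICIENT';
    END IF;
END //
DELIMITER ;

-- One round trip:  CALL transfer_funds(1, 2, 100.00, @status);
```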

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications
Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in


Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed
It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.
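A number-crunching routine along those lines might look like this sketch (illustrative trial division, not the article's exact benchmark code):

```sql
DELIMITER //
-- Count the primes below p_max by trial division.
CREATE PROCEDURE count_primes(IN p_max INT, OUT p_count INT)
BEGIN
    DECLARE n INT DEFAULT 2;
    DECLARE d INT;
    DECLARE is_prime INT;
    SET p_count = 0;
    WHILE n < p_max DO
        SET d = 2;
        SET is_prime = 1;
        -- Check divisors up to the square root of n.
        WHILE d * d <= n AND is_prime = 1 DO
            IF n % d = 0 THEN
                SET is_prime = 0;
            END IF;
            SET d = d + 1;
        END WHILE;
        SET p_count = p_count + is_prime;
        SET n = n + 1;
    END WHILE;
END //
DELIMITER ;

-- CALL count_primes(10000, @c); SELECT @c;
```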

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice



Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation
Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping
It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?
Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes

you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes and will take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
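The three uses above can be sketched as follows (table, event and function names are hypothetical; CREATE EVENT requires MySQL 5.1 or later):

```sql
-- 1. A trigger that logs every update, even those made via ad hoc SQL:
CREATE TRIGGER accounts_audit
AFTER UPDATE ON accounts
FOR EACH ROW
    INSERT INTO accounts_log (account_id, changed_at)
    VALUES (NEW.account_id, NOW());

-- 2. A scheduled maintenance task that purges old log rows daily:
CREATE EVENT purge_log
ON SCHEDULE EVERY 1 DAY
DO DELETE FROM accounts_log
    WHERE changed_at < NOW() - INTERVAL 90 DAY;

-- 3. A function callable from standard SQL:
CREATE FUNCTION monthly_rate(annual_rate DECIMAL(6,4))
RETURNS DECIMAL(6,4) DETERMINISTIC
RETURN annual_rate / 12;

-- SELECT name, monthly_rate(rate) FROM loans;
```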

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006, with Steven Feuerstein). Guy can be contacted at guyharrison@quest.com.


In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.

The ha.cf File
The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive: the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW


Listing 1. The /etc/ha.d/authkeys File

auth 1

1 sha1 ThisIsAVeryLongAndBoringPassword


increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1
# On stratton, the serial device is /dev/ttyS0
serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos' heartbeat ip address is 192.168.1.11
# phobos' heartbeat ip address is 192.168.1.21
ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that, so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File
The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).



Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting
Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion
I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat, Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAP3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation
To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more space. I recommend at least a 1GHz processor or above with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two

ports to allow LDAP (389 default) and the Admin Console(default is random port) For the servers used here I choseports 3891 and 3892 because of an existing LDAP installationin my environment Fedora also natively supports Security-Enhanced Linux (SELinux) a policy-based lock-down tool ifyou choose to use it If you want to use SELinux you mustchoose the Permissive Policy

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives
rm java
ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication

Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

SYSTEM ADMINISTRATION

numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.
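The account creation and setup launch described above boil down to two commands. This is a sketch, run as root; the /opt/fedora-ds path comes from the RPM layout mentioned earlier:

```
# Create the unprivileged account FDS will run under
useradd fedorauser

# Launch the interactive FDS installation script
# (choose Install Mode 2 - Typical when prompted)
/opt/fedora-ds/setup/setup
```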

When setup is complete, use the syntax returned from the script to access the admin console (./startconsole -u admin http://fullyqualifieddomainname:port) using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd
/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for

www.linuxjournal.com | 43

Figure 1. Install Options
Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora/FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already to make life easier.

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add and then the Save button at the bottom. On




Figure 5. User Screen
Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on the second server, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients

With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your /etc/nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll

to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zip file from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some DLL issues with

version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging on locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.




Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


Install TFTP

After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package, or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page, and check on its conventions if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
# the trivial file transfer protocol. The tftp protocol
# is often used to boot diskless workstations, download
# configuration files to network-aware printers and to
# start the installation process for some operating systems.
service tftp
{
        disable         = no
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /var/lib/tftpboot
        per_source      = 11
        cps             = 100 2
        flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 UDP) as input.

Add Syslinux

Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.

PXE Menu Magic

You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines based on the fact that when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order:

- Files named 01-MACADDRESS, with hyphens between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

- Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

- Finally, if no specific files can be found, pxelinux will look for a file named default and use it.

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux

label linux
        kernel vmlinuz-centos-36
        append text nofb load_ramdisk=1 initrd=initrd-centos-36.img network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?

The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group in IP ranges, what


You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software.

happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools like Knoppix and memtest86+ available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus

Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more-specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
        kernel vmlinuz-knx511
        append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx511.gz quiet BOOT_IMAGE=knoppix

label 2
        kernel vmlinuz-centos-45-64
        append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-45-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the

timeout is 30 seconds, after which it will boot using the options set under label 1. The display option lists a message to display by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------
<F1> Main | <F2> Help
Default image will boot in 30 seconds

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk

One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk; I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx511.gz and vmlinuz-knx511, respectively. Now, edit your pxelinux.cfg/default file and add lines like the one I used above in my example:

label 1
        kernel vmlinuz-knx511
        append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx511.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also, notice that this example references the renamed vmlinuz-knx511 and miniroot-knx511.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove 2




from the append line.

The final step might be the largest

for you, if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.

Extra Features: Memtest86+

Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
        kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion

There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate

which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able

to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.



Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan

IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825–1829). Later, in 1998, these RFCs were deprecated by RFCs 2401–2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security

Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network

Creating VPNs with IPsec and SSL/TLS

How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN


Figure 1. A Basic VPN Tunnel

Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. SWAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a set-mark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.
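As a minimal sketch of that marking approach (not from the original article; the mark value 1 is arbitrary, and the rules must be adapted to your existing ruleset):

```shell
# Mark ESP traffic on arrival so that later rules can recognize
# packets that entered through the IPsec tunnel.
iptables -t mangle -A PREROUTING -p esp -j MARK --set-mark 1

# The mark survives decryption, so decrypted packets can be matched:
iptables -A INPUT -m mark --mark 1 -j ACCEPT
```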

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN
OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.

www.linuxjournal.com | 31

Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan
First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
        #
        # Simply use raw RSA keys
        # After starting openswan, run:
        # ipsec showhostkey --left (or --right)
        # and fill in the connection similarly
        # to the example below.
        left=192.168.0.89
        leftrsasigkey=0sAQPP
        # The remote user.
        right=192.168.0.92
        rightrsasigkey=0sAQON
        type=tunnel
        auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsa.key

Then, copy and paste the contents of rsa.key into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of

right=ipAddress
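Putting that together, a hypothetical conn section for such roaming clients might look like the following sketch (the key values are elided, and auto=add is used because the server cannot initiate a connection to an unknown address):

```
conn roadwarriors
        left=192.168.0.89
        leftrsasigkey=0sAQPP
        # Accept connections from any remote address:
        right=%any
        rightrsasigkey=0sAQON
        type=tunnel
        auto=add
```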

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. The output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                             [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                        [OK]
NETKEY detected, testing for disabled ICMP send_redirects   [OK]
NETKEY detected, testing for disabled ICMP accept_redirects [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets) [OK]
Checking that pluto is running                              [OK]
Checking for ip command                                     [OK]
Checking for iptables command                               [OK]
Opportunistic Encryption Support                            [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets flowing in the same direction; it is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.
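That directive goes in the config setup section of /etc/ipsec.conf; a minimal sketch:

```
config setup
        nat_traversal=yes
```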

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.



Example: Setting Up a VPN Tunnel with OpenVPN
First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side in a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.
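For example, one hypothetical way to copy the key over SSH (the user name, host name and destination path are placeholders for your own):

```shell
# Copy the shared key to the peer over an encrypted SSH channel.
scp static.key admin@vpn-peer.example.com:/etc/openvpn/static.key
```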

This type of shared key is the simplest key; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.
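As a sketch, a bridged setup with a custom MTU would change those lines of server.conf like this (the value 1400 is an arbitrary example, not a recommendation):

```
dev tap
tun-mtu 1400
```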

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and running openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar  3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official
port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000
as the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).
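For instance, a minimal sketch of rules that leave the tunnel interface un-firewalled (tun0 is the device created in the example above; adapt these to your own ruleset):

```shell
# Accept traffic arriving on, and forwarded through, the VPN interface:
iptables -A INPUT -i tun0 -j ACCEPT
iptables -A FORWARD -i tun0 -j ACCEPT
```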

IPsec and OpenVPN: a Short Comparison
IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement
Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and OpenSolaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or "stored routines", to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era
Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages because, in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which

stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security
Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.
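The article gives no code for this, but a hedged sketch of the idea might look like the following; the database, table and account names (bankdb, accounts, txn_log, app_user) are invented, and the procedure body is deliberately minimal:

```shell
mysql -u root -p bankdb <<'SQL'
DELIMITER //
-- Definer-rights procedure: runs with its creator's privileges,
-- so app_user needs no direct INSERT/UPDATE rights on the tables.
CREATE PROCEDURE add_funds(IN p_account INT, IN p_amount DECIMAL(10,2))
    SQL SECURITY DEFINER
BEGIN
    INSERT INTO txn_log (account_id, amount, logged_at)
    VALUES (p_account, p_amount, NOW());
    UPDATE accounts SET balance = balance + p_amount
    WHERE account_id = p_account;
END//
DELIMITER ;
-- The application account may call the procedure, and nothing else:
GRANT EXECUTE ON PROCEDURE bankdb.add_funds TO 'app_user'@'%';
SQL
```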

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

Data Abstraction
It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications
Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in


Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed
It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while

the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice



Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation
Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping
It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?
Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes

you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements, such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes and will take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly 2006, with Steven Feuerstein). Guy can be contacted at guyharrison@quest.com.



In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does

not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.

The ha.cf File
The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive: the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

SYSTEM ADMINISTRATION

Listing 1. The /etc/ha.d/authkeys File

auth 1

1 sha1 ThisIsAVeryLongAndBoringPassword


increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.
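These four timing directives interact, so a quick sanity check can save a round of log-reading. The helper below is hypothetical (not part of Heartbeat); it parses the values out of an ha.cf and checks that warntime sits between keepalive and deadtime and that initdead is at least the deadtime:

```shell
# Write a sample ha.cf (values from Listing 2) and check its timing values.
HACF=${HACF:-/tmp/ha.cf.example}
cat > "$HACF" <<'EOF'
keepalive 2
deadtime 32
warntime 16
initdead 64
EOF
# get: pull the value of a single directive out of the file
get() { awk -v key="$1" '$1 == key { print $2 }' "$HACF"; }
keepalive=$(get keepalive); deadtime=$(get deadtime)
warntime=$(get warntime);   initdead=$(get initdead)
if [ "$warntime" -gt "$keepalive" ] && [ "$warntime" -lt "$deadtime" ] \
    && [ "$initdead" -ge "$deadtime" ]; then
    echo "timings look sane"
else
    echo "check your timings"
fi
```

Point HACF at a real /etc/ha.d/ha.cf to check a live node.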

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary, standby state

www.linuxjournal.com | 39

Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs the serial device is /dev/ttyS1

# On stratton the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos' heartbeat ip address is 192.168.1.11

# phobos' heartbeat ip address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File
The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions the command is

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.
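On Red Hat-style nodes, for instance, chkconfig does the same job. A hedged sketch (the service names are examples, and the echo makes this a dry run; drop it to actually apply the change as root):

```shell
# Print the chkconfig commands that would remove each Heartbeat-managed
# service from the init sequence on a Red Hat-style system.
for svc in httpd nfs; do
    echo /sbin/chkconfig --del "$svc"
done
```

Whichever tool you use, the point is the same: init must no longer start these services, because Heartbeat will.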

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right, the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).




Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second. Not bad for two old castaways.
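That hourly sync can be a single crontab entry on the secondary. This is a sketch, not the author's actual job: the hostname and document root are placeholders, and it assumes SSH-key access between the nodes:

```
# m h dom mon dow  command   (secondary's crontab; pulls from the primary)
0 * * * *  rsync -a --delete briggs:/var/www/ /var/www/
```

The --delete flag keeps removed pages from lingering on the standby; leave it out if you would rather err on the side of keeping files.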

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting
Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion
I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include

active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use, Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation
To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of disk space. I recommend at least a 1GHz processor or above with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two

ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING


numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (./startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script, or directly edit /etc/rc.d/rc.local, to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management
Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for


Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication
Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two), configured the same way (Fedora, FDS), with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts, or use DNS. I recommend having DNS and NTP in place already to make life easier.
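Both prerequisites are worth spot-checking from each node before you go further. A small sketch (the peer FQDN is a placeholder; localhost is included only to show the success case):

```shell
# Verify that a hostname resolves; run this on each server against the
# other's FQDN.
check_resolves() {
    if getent hosts "$1" > /dev/null; then
        echo "$1 resolves"
    else
        echo "$1 does not resolve"
    fi
}
check_resolves localhost
check_resolves fds2.example.com   # placeholder: your peer's FQDN
date -u   # run on both nodes and compare; NTP should keep the offset small
```

If the peer's name does not resolve, fix /etc/hosts or DNS before touching any replication settings.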

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On




Figure 5. User Screen

Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication, which sends only deltas rather than the entire directory database when changes occur; this is very useful over WAN connections. On the Schedule screen, select Always keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients
With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility, available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tab. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
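For reference, the edits the utility makes boil down to entries of roughly this shape. This is a sketch: the server name, port and base DN are placeholders for your own values:

```
# /etc/ldap.conf
host fds1.example.com
port 3891
base dc=example,dc=com

# /etc/nsswitch.conf (relevant lines)
passwd: files ldap
shadow: files ldap
group:  files ldap
```

Letting the utility write these is safer than editing by hand, but knowing what changed helps when troubleshooting a client that won't authenticate.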


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pgina client has solved this problem. To connect 2000/XP clients, download the main pgina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some .dll issues with

version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the Pgina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pgina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pgina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions
FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pgina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.




Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pgina Site: www.pgina.org

pgina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups.

With this feature, you can make rescue tools, like Knoppix and memtest86+, available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.

Configure PXE Menus
Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more-specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more-specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
  kernel vmlinuz-knx511
  append secure nfsdir=10.0.0.1:/mnt/knoppix511 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx511.gz quiet BOOT_IMAGE=knoppix

label 2
  kernel vmlinuz-centos-45-64
  append text nofb ksdevice=eth0 load_ramdisk=1 initrd=initrd-centos-45-64.img network ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option lists a message, if there are any to display by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot that contains something like:

----| Boot Options |-----
|                       |
| 1. Knoppix 5.1.1      |
| 2. CentOS 4.5 64 bit  |
|                       |
-------------------------

<F1> Main | <F2> Help

Default image will boot in 30 seconds

Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.
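Creating that help file is a one-liner with a heredoc. A sketch follows, writing under a stand-in directory so it is safe to run anywhere; on the real server, the file belongs in /var/lib/tftpboot, and the name must match the F2 line in your config:

```shell
# Write a short F2 help screen for pxelinux.
TFTPDIR=${TFTPDIR:-/tmp/tftpboot.example}
mkdir -p "$TFTPDIR"
cat > "$TFTPDIR/f2.msg" <<'EOF'
Option 1 boots Knoppix over NFS for rescue work.
Option 2 kickstarts a CentOS 4.5 64-bit install and will
ERASE the target machine's disks. Press F1 for the main menu.
EOF
cat "$TFTPDIR/f2.msg"
```

A loud warning on the destructive option is worth the two extra lines; the person at the console may not be the person who built the menu.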

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.

Extra Features: PXE Rescue Disk

One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk; I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, except rename them something distinct, such as miniroot-knx511.gz and vmlinuz-knx511, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the one I used above in my example:

label 1
    kernel vmlinuz-knx511
    append secure nfsdir=10.0.0.1:/mnt/knoppix5.1.1 nodhcp lang=us ramdisk_size=100000 init=/etc/init 2 apm=power-off nomce vga=normal initrd=miniroot-knx511.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx511 and miniroot-knx511.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove 2 from the append line.

SYSTEM ADMINISTRATION

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them.

The final step might be the largest for you, if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.
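On the NFS server itself, the share can be published with a single /etc/exports entry. This is a sketch, and the export options are assumptions (read-only is enough for booting); run exportfs -ra after editing the file:

```
# /etc/exports on 10.0.0.1 -- options are one reasonable choice
/mnt/knoppix5.1.1    *(ro,async,no_root_squash,no_subtree_check)
```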

Extra Features: Memtest86+

Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
    kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion

There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.



Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan

IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825-1829). Later, in 1998, these RFCs were deprecated by RFCs 2401-2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

Creating VPNs with IPsec and SSL/TLS
How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN

Figure 1. A Basic VPN Tunnel

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. S/WAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a --set-mark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN

OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) to connect to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan

First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
    # Simply use raw RSA keys.
    # After starting Openswan, run:
    #     ipsec showhostkey --left (or --right)
    # and fill in the connection similarly
    # to the example below.
    left=192.168.0.89
    leftrsasigkey=0sAQPP
    # The remote user:
    right=192.168.0.92
    rightrsasigkey=0sAQON
    type=tunnel
    auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsa.key

Then, copy and paste the contents of rsa.key into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.
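Putting those options together, a responder-side conn for such roaming clients might look like this sketch (the truncated keys are placeholders; auto=add loads the connection and waits for the client to initiate):

```
conn roadwarrior
    left=192.168.0.89
    leftrsasigkey=0sAQPP
    right=%any
    rightrsasigkey=0sAQON
    type=tunnel
    auto=add
```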

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. An output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly

Version check and ipsec on-path [OK]

Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)

Checking for IPsec support in kernel [OK]

NETKEY detected testing for disabled ICMP send_redirects [OK]

NETKEY detected testing for disabled ICMP accept_redirects [OK]

Checking for RSA private key (/etc/ipsec.d/hostkey.secrets) [OK]

Checking that pluto is running [OK]

Checking for ip command [OK]

Checking for iptables command [OK]

Opportunistic Encryption Support [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the SPI (Security Parameter Index) is the same for all packets flowing in the same direction; it is an identifier of the connection (each direction of the tunnel has its own SPI, as the output above shows).

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf (nat_traversal=no is the default).
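This parameter belongs in the config setup section of /etc/ipsec.conf, for example:

```
config setup
    nat_traversal=yes
```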

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.



Example: Setting Up a VPN Tunnel with OpenVPN

First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side in a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.
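For instance, a bridged client configuration might look like the following sketch (the host name and MTU value are assumptions, not from the setup above):

```
remote server.example.com
dev tap
tun-mtu 1400
secret static.key
comp-lzo
```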

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar 3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official
port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000
as the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).
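For example, an iptables-restore style fragment such as this sketch (the policy itself is invented for illustration) treats tun0 exactly like a physical interface:

```
*filter
# Allow SSH arriving over the tunnel; drop everything else from it
-A INPUT -i tun0 -p tcp --dport 22 -j ACCEPT
-A INPUT -i tun0 -j DROP
COMMIT
```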

IPsec and OpenVPN: a Short Comparison

IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel space, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement

Thanks to Mr. Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and OpenSolaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era

Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security

Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.
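As a sketch of that model (every table, column and account name here is hypothetical; the syntax is MySQL 5.0), the application account is granted EXECUTE on a logging procedure but no direct privileges on the tables it touches:

```sql
-- Runs with the definer's privileges (the default SQL SECURITY mode),
-- so app_user needs no INSERT privilege on the underlying tables.
CREATE DEFINER = 'admin'@'localhost' PROCEDURE log_order_entry(
    IN cust_id INT, IN amount DECIMAL(10,2))
    SQL SECURITY DEFINER
BEGIN
    INSERT INTO orders (customer_id, order_amount, created_at)
        VALUES (cust_id, amount, NOW());
    -- Every insert is logged, as the article describes:
    INSERT INTO audit_log (customer_id, action, logged_at)
        VALUES (cust_id, 'order_insert', NOW());
END;

-- The application account can call the procedure, and nothing more:
GRANT EXECUTE ON PROCEDURE shop.log_order_entry TO 'app_user'@'%';
```

In the mysql client, you would wrap the CREATE PROCEDURE statement in a DELIMITER change (for example, DELIMITER //) so the semicolons inside the body do not terminate the statement prematurely.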

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring their legacy advantages and challenges to MySQL. GUY HARRISON

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

Data Abstraction

It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic

Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
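The fund-transfer logic might be sketched as a single stored procedure like the following; the schema and the checks are invented for illustration, and a real implementation would also enforce withdrawal limits and log the transaction:

```sql
-- Hypothetical schema: accounts(acct_id INT, balance DECIMAL(10,2)).
-- One CALL replaces the half-dozen round trips described above.
CREATE PROCEDURE transfer_funds(
    IN from_acct INT, IN to_acct INT, IN amount DECIMAL(10,2),
    OUT status VARCHAR(20))
BEGIN
    DECLARE from_balance DECIMAL(10,2);

    START TRANSACTION;
    -- Lock the source row while we check the balance.
    SELECT balance INTO from_balance
        FROM accounts WHERE acct_id = from_acct FOR UPDATE;

    IF from_balance >= amount THEN
        UPDATE accounts SET balance = balance - amount
            WHERE acct_id = from_acct;
        UPDATE accounts SET balance = balance + amount
            WHERE acct_id = to_acct;
        COMMIT;
        SET status = 'OK';
    ELSE
        ROLLBACK;
        SET status = 'INSUFFICIENT';
    END IF;
END;
```

The application then issues a single CALL transfer_funds(...) and reads the status parameter, rather than running each query over the network itself.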

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

Creating Common Routines across Multiple Applications

Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

For instance, in a banking application, transfer-of-funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed

It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine and you have a choice between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network: the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well, even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation

Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping

It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?

Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes

you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements, such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations

MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes, and they will take effect for all applications that access the database, including ad hoc SQL.
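For instance, a trigger along these lines could maintain a denormalized running total automatically; the `order_lines` and `order_totals` tables and their columns are made up for illustration:

```sql
-- Keep a denormalized per-order total up to date on every insert,
-- no matter which application (or ad hoc session) does the inserting.
CREATE TRIGGER order_lines_after_insert
  AFTER INSERT ON order_lines
  FOR EACH ROW
  UPDATE order_totals
     SET total = total + NEW.amount
   WHERE order_id = NEW.order_id;
```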

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.
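A purge task under the 5.1 event scheduler might be sketched like this; the `sessions` table and the retention window are hypothetical:

```sql
-- The scheduler must be running: SET GLOBAL event_scheduler = ON;

-- Purge stale rows once an hour (table and column names are examples).
CREATE EVENT purge_old_sessions
  ON SCHEDULE EVERY 1 HOUR
  DO
    DELETE FROM sessions
     WHERE last_seen < NOW() - INTERVAL 7 DAY;
```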

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
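As a sketch, a stored function encapsulating a pricing rule (the function name, its arguments and the rule itself are invented for illustration) can be defined once and then reused from any SQL statement:

```sql
-- Centralize a discount calculation so every query applies it identically.
CREATE FUNCTION f_discounted_price(price DECIMAL(10,2), pct DECIMAL(5,2))
  RETURNS DECIMAL(10,2)
  DETERMINISTIC
  RETURN price * (1 - pct / 100);

-- Callable from ordinary SQL:
-- SELECT name, f_discounted_price(price, 10) FROM products;
```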

You Decide

The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006, with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.



In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat

Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager tries to start up the Heartbeat service. However, the service does

not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses, from looking at your environment, as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat

Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.

The ha.cf File

The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around! Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive, the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try

Getting Started with Heartbeat

Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

SYSTEM ADMINISTRATION

Listing 1. The /etc/ha.d/authkeys File

auth 1

1 sha1 ThisIsAVeryLongAndBoringPassword


increasing the keepalive.

The deadtime is next. This is the time to wait without

hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and when the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy; simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos heartbeat ip address is 192.168.1.11

# phobos heartbeat ip address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn

on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File

The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right, the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).




Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second, which is not bad for two old castaways.
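The hourly rsync job just mentioned could be as simple as a crontab entry along these lines; the paths and the use of stratton as the push target are illustrative, since the article does not give the actual command:

```
# /etc/cron.d fragment on the primary: push the static HTML
# to the peer at the top of every hour (paths are examples)
0 * * * *  root  rsync -a --delete /var/www/ stratton:/var/www/
```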

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting

Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion

I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include

active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat, Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation

To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of space. I recommend at least a 1GHz processor with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two

ports to allow: LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then start the service by typing

service httpd start

Now, with the useradd command, create an account named fedorauser, under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication

Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING


numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script, or directly edit /etc/rc.d/rc.local, to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for


Figure 1. Install Options

Figure 2. Main Console

Figure 3 Tasks Tab

Figure 4 Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying

permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate

the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two), configured the same way (Fedora + FDS), with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already to make life easier.

The next step is creating a Replication Manager account to bind one server's directory to the other, and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On




Figure 5. User Screen

Figure 6. Setting Up Replica

server two, perform the same steps as just completed: creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication, to send only deltas rather than the entire directory database when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients

With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain POSIX information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility, available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tab. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
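For reference, the net effect of the utility on /etc/nsswitch.conf looks roughly like the following sketch. The exact lines your release writes may differ; the point is that name-service lookups consult the local files first and then LDAP:

```
passwd: files ldap
shadow: files ldap
group:  files ldap
```

The LDAP server address and search base themselves land in /etc/ldap.conf, which is why you enter them in the utility rather than editing files by hand.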

www.linuxjournal.com | 45

Figure 7. Enter a name and password.

Figure 8. Enter information for the RManager account.

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's POSIX information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge and extract the files. For this article, I used version 1.8.4, as I ran into some dll issues with

version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group in the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.


SYSTEM ADMINISTRATION

Setting up replication in FDS is a relatively painless process.

Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


from the append line. The final step might be the largest

for you, if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.
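If you do need to create that share from scratch, a single line in /etc/exports along these lines is typically enough. This is a sketch; the read-only flag and the client subnet here are assumptions you should adapt to your network:

```
/mnt/knoppix/5.1.1 10.0.0.0/255.255.255.0(ro,async,no_root_squash)
```

After editing the file, run exportfs -ra to publish the share.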

Extra Features: Memtest86+

Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process, because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
        kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.

Conclusion

There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice, colorful, user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server, all from the network.

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.


New LinuxJournal.com Mobile

We are all very excited to let you know that LinuxJournal.com is optimized for mobile viewing. You can enjoy all of our news, blogs and articles from anywhere you can find a data connection on your phone or mobile device. We know you find it difficult to be separated from your Linux Journal, so now you can take LinuxJournal.com everywhere. Need to read that latest shell script trick right now? You got it. Go to m.linuxjournal.com to enjoy this new experience, and be sure to let us know how it works for you.

Katherine Druckman

Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org


VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan

IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825-1829). Later, in 1998, these RFCs were deprecated by RFCs 2401-2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2402) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.
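The ICV idea can be sketched in a few lines of Python: both peers share a secret key and compute a keyed hash (HMAC-SHA1, which both IPsec and OpenVPN support) over the packet contents, with the sequence number mixed in so a replayed or altered packet fails the check. This illustrates the principle only, not the exact ESP wire format:

```python
import hashlib
import hmac

def icv(secret_key: bytes, seq: int, payload: bytes) -> bytes:
    # Keyed checksum over sequence number + payload; only a peer
    # holding secret_key can produce or verify it.
    msg = seq.to_bytes(4, "big") + payload
    return hmac.new(secret_key, msg, hashlib.sha1).digest()

key = b"shared-secret"
tag = icv(key, 7, b"some packet data")

# The receiver recomputes the ICV and compares in constant time.
assert hmac.compare_digest(tag, icv(key, 7, b"some packet data"))

# A tampered payload or replayed sequence number fails the check.
assert not hmac.compare_digest(tag, icv(key, 8, b"some packet data"))
```
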

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network

Creating VPNs with IPsec and SSL/TLS
How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN


Figure 1. A Basic VPN Tunnel

Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. SWAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a setmark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.
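As a sketch of that approach (the mark value 50 here is arbitrary, chosen only to echo the ESP protocol number), incoming ESP packets can be marked in the mangle table before decryption, and the surviving mark then matched on the decrypted packets:

```
# Mark incoming ESP (protocol 50) packets before decryption.
iptables -t mangle -A PREROUTING -p esp -j MARK --set-mark 50

# The mark survives decryption, so it identifies IPsec traffic here.
iptables -A INPUT -m mark --mark 50 -j ACCEPT
```
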

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN

OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices. TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan

First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
        # Simply use raw RSA keys.
        # After starting Openswan, run:
        #     ipsec showhostkey --left (or --right)
        # and fill in the connection similarly
        # to the example below.
        left=192.168.0.89
        leftrsasigkey=0sAQPP...
        # The remote user.
        right=192.168.0.92
        rightrsasigkey=0sAQON...
        type=tunnel
        auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsakey

Then, copy and paste the contents of rsakey into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic

ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. The output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                              [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                         [OK]
NETKEY detected, testing for disabled ICMP send_redirects    [OK]
NETKEY detected, testing for disabled ICMP accept_redirects  [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets)  [OK]
Checking that pluto is running                               [OK]
Checking for 'ip' command                                    [OK]
Checking for 'iptables' command                              [OK]
Opportunistic Encryption Support                             [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the SPI (Security Parameter Index) is the same for all packets flowing in the same direction; it is an identifier of the connection (there is one Security Association per direction).

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.



Example: Setting Up a VPN Tunnel with OpenVPN

First, download and install the OpenVPN package (most distros have this package).

Then create a shared key by doing the following

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side over a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest key; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file, named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar  3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official
port number assignment by IANA.  OpenVPN 2.0-beta16 and earlier used 5000 as
the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read from or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).
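As a sketch of what "a user-space program can open a TUN device" means in practice, the Linux-only snippet below builds the struct ifreq for the TUNSETIFF ioctl and opens /dev/net/tun; the constants are taken from the kernel's if_tun.h and are assumed correct for common architectures. Reading from the returned descriptor then yields raw IP packets:

```python
import fcntl
import os
import struct

# Constants from <linux/if_tun.h> (assumed values, x86/x86_64 Linux).
TUNSETIFF = 0x400454CA
IFF_TUN = 0x0001      # PPP-like, layer-3 device
IFF_TAP = 0x0002      # Ethernet-like, layer-2 device
IFF_NO_PI = 0x1000    # no extra packet-info header prepended

def tun_ifreq(name, flags=IFF_TUN | IFF_NO_PI):
    # struct ifreq: 16-byte interface name, 2-byte flags, padding to 32.
    return struct.pack("16sH14s", name.encode(), flags, b"")

def open_tun(name="tun0"):
    # Requires root and the tun kernel module loaded.
    fd = os.open("/dev/net/tun", os.O_RDWR)
    fcntl.ioctl(fd, TUNSETIFF, tun_ifreq(name))
    return fd  # os.read(fd, 2048) returns one raw IP packet
```
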

IPsec and OpenVPN: a Short Comparison

IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement

Thanks to Mr. Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and OpenSolaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era

Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security

Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.
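A minimal sketch of this pattern in MySQL 5 follows; the bank schema, table and account names here are hypothetical. The procedure runs with its definer's rights (SQL SECURITY DEFINER is the default for MySQL stored routines), so the application account needs only EXECUTE on the procedure, not INSERT on the underlying table:

```sql
DELIMITER //

-- Created by a privileged account; executes with that account's rights.
CREATE PROCEDURE bank.log_deposit(IN acct INT, IN amount DECIMAL(10,2))
    SQL SECURITY DEFINER
BEGIN
    INSERT INTO bank.deposit_log (account_id, amount, logged_at)
    VALUES (acct, amount, NOW());
END//

DELIMITER ;

-- The application account can call the procedure, but holds no
-- direct privileges on bank.deposit_log itself.
GRANT EXECUTE ON PROCEDURE bank.log_deposit TO 'app_user'@'%';
```
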

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Data Abstraction

It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic

Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
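A sketch of what such a stored program might look like in MySQL 5 follows; the accounts table and column names are hypothetical, and real code would also check withdrawal limits and log the transfer, as described above. The point is that all the reads, checks and updates happen server-side, inside one CALL:

```sql
DELIMITER //

CREATE PROCEDURE transfer_funds(
    IN  from_acct INT,
    IN  to_acct   INT,
    IN  amount    DECIMAL(10,2),
    OUT status    VARCHAR(20))
BEGIN
    DECLARE from_balance DECIMAL(10,2);

    START TRANSACTION;

    -- Lock the source row and read its balance, all server-side.
    SELECT balance INTO from_balance
      FROM accounts
     WHERE account_id = from_acct
       FOR UPDATE;

    IF from_balance >= amount THEN
        UPDATE accounts SET balance = balance - amount
         WHERE account_id = from_acct;
        UPDATE accounts SET balance = balance + amount
         WHERE account_id = to_acct;
        COMMIT;
        SET status = 'OK';
    ELSE
        ROLLBACK;
        SET status = 'INSUFFICIENT FUNDS';
    END IF;
END//

DELIMITER ;
```

The application then issues a single CALL transfer_funds(...) and inspects the OUT parameter, instead of six or more separate statements.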

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure

to take into account the differences in raw computationspeed which I discuss later in this article

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

www.linuxjournal.com | 35

Creating Common Routines across Multiple Applications

Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed

It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.
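The stored procedure side of such a benchmark might be sketched along these lines (an illustration of the technique, not the exact code used in the trial):

```sql
DELIMITER //

CREATE PROCEDURE count_primes(IN upper_bound INT, OUT prime_count INT)
BEGIN
    DECLARE candidate INT DEFAULT 2;
    DECLARE divisor   INT;
    DECLARE is_prime  INT;

    SET prime_count = 0;
    WHILE candidate <= upper_bound DO
        -- Trial division up to the square root of the candidate
        SET is_prime = 1;
        SET divisor  = 2;
        WHILE divisor * divisor <= candidate AND is_prime = 1 DO
            IF MOD(candidate, divisor) = 0 THEN
                SET is_prime = 0;
            END IF;
            SET divisor = divisor + 1;
        END WHILE;
        SET prime_count = prime_count + is_prime;
        SET candidate = candidate + 1;
    END WHILE;
END //

DELIMITER ;
```

Writing the identical loop in PHP, Perl, Java or C and timing each against the same upper bound gives the comparison shown in Figure 3.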

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

SYSTEM ADMINISTRATION

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network: the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation

Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping

It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?

Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations

MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes and will take effect for all applications that access the database, including ad hoc SQL.
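For example, a trigger along these lines (both tables are hypothetical) could maintain a denormalized running total automatically:

```sql
-- Hypothetical tables: transactions(account_id, amount) and
-- account_totals(account_id, total)
CREATE TRIGGER transactions_after_insert
AFTER INSERT ON transactions
FOR EACH ROW
    UPDATE account_totals
       SET total = total + NEW.amount
     WHERE account_id = NEW.account_id;
```

Every application that inserts into transactions, including ad hoc SQL from the command line, keeps account_totals consistent without any extra application code.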

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.
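A purge job under the 5.1 event scheduler might be sketched like this (the table name and retention period are made up for illustration):

```sql
-- Requires MySQL 5.1 or later
SET GLOBAL event_scheduler = ON;

CREATE EVENT purge_old_sessions
    ON SCHEDULE EVERY 1 DAY
    DO
        DELETE FROM sessions
         WHERE last_seen < NOW() - INTERVAL 30 DAY;
```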

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
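For instance, a pricing calculation could be wrapped in a stored function and used directly in queries (the names below are illustrative, not from any real schema):

```sql
CREATE FUNCTION line_total(price DECIMAL(10,2), qty INT,
                           discount DECIMAL(4,3))
    RETURNS DECIMAL(12,2) DETERMINISTIC
    RETURN ROUND(price * qty * (1 - discount), 2);

-- The same calculation is now available to every application
-- and to ad hoc SQL:
SELECT item_id, line_total(price, qty, discount)
  FROM order_items;
```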

You Decide

The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006; with Steven Feuerstein). Guy can be contacted at guyharrison@quest.com.


In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat

Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare, on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat

Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.

The ha.cf File

The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive, the time between heartbeats, in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try

Getting Started with Heartbeat

Your first step toward high-availability bliss. DANIEL BARTHOLOMEW


Listing 1. The /etc/ha.d/authkeys File

auth 1

1 sha1 ThisIsAVeryLongAndBoringPassword


increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos heartbeat ip address is 192.168.11.1

# phobos heartbeat ip address is 192.168.12.1

ucast eth1 192.168.11.1

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that, so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File

The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove
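On Red Hat-style distributions, chkconfig does the equivalent job; for example, assuming the service to be managed is named httpd:

```
/sbin/chkconfig --del httpd
```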

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3> ...

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right, the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).


Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.
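As a sketch of the start/stop contract Heartbeat expects, here is a minimal resource script for a hypothetical "myservice" (written as a shell function here so the dispatch logic is easy to test; a real script would live in /etc/init.d and act directly on "$1"):

```shell
#!/bin/sh
# Minimal LSB-style dispatch for a hypothetical "myservice".
# Heartbeat only needs "start" and "stop" to work.
myservice() {
    case "$1" in
        start) echo "Starting myservice" ;;
        stop)  echo "Stopping myservice" ;;
        *)     echo "Usage: myservice {start|stop}" >&2 ; return 1 ;;
    esac
}

started=$(myservice start)
stopped=$(myservice stop)
echo "$started"
echo "$stopped"
```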

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second. Not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting

Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion

I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha


In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use, Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation

To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of space. I recommend at least a 1GHz processor or above with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory, and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication

Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING


numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script, or directly edit /etc/rc.d/rc.local, to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.
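One way to schedule such a backup from the CLI is a cron entry that calls the db2bak script in the instance directory. The path below assumes the /opt/fedora-ds layout used in this article; adjust slapd-yourservername to your own instance name:

```
# /etc/crontab entry (sketch): back up the directory nightly at 2am
0 2 * * * root /opt/fedora-ds/slapd-yourservername/db2bak
```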

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for


Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying

permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information, and activate/de-activate

the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.
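The same kind of user entry the console builds can also be created from the command line with an LDIF file. The sketch below is hypothetical: the OU (Accounting), the suffix (dc=example,dc=com) and all of the user details are placeholder values, not taken from this article, and the ldapmodify invocation assumes the server answers on the default LDAP port.

```shell
# Hypothetical LDIF for a new user; every value here is a placeholder.
cat > user.ldif <<'EOF'
dn: uid=jdoe,ou=Accounting,dc=example,dc=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: posixAccount
cn: John Doe
sn: Doe
givenName: John
uid: jdoe
mail: jdoe@example.com
uidNumber: 5001
gidNumber: 5001
homeDirectory: /home/jdoe
loginShell: /bin/bash
EOF

# Load the entry into the directory (prompts for the admin password):
# ldapmodify -a -x -h server1.example.com -D "cn=Directory Manager" \
#            -W -f user.ldif

# The posixAccount attributes above are the ones desktop clients
# will need later in order to authenticate.
grep -c '^objectClass:' user.ldif
```

Note the posixAccount object class: without it, the entry can exist in the directory but cannot be used for client logins.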

Replication
Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora, FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts, or use DNS. I recommend having DNS and NTP in place already to make life easier.
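As a quick sanity check before continuing, both prerequisites can be confirmed from a shell on each server. The hostnames below are placeholders for your own two directory servers.

```shell
# Placeholders: substitute your actual server names.
for host in server1.example.com server2.example.com; do
    getent hosts "$host" >/dev/null && echo "$host resolves" \
        || echo "$host does NOT resolve; fix DNS or /etc/hosts"
done

# Print the clock as a UNIX timestamp; run this on both servers
# and compare the two values to spot a large offset.
date -u +%s
```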

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On


SYSTEM ADMINISTRATION

In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others.

Figure 5. User Screen

Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now, you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients
With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM, the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
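On a distribution without such a utility, the same result can be achieved by hand. The fragment below is a rough sketch of the relevant lines; the hostname and base DN are placeholders for your own server and suffix, and exact file names vary slightly between distributions.

```
# /etc/ldap.conf (placeholder host and suffix)
host server1.example.com
base dc=example,dc=com

# /etc/nsswitch.conf -- consult LDAP after the local files
passwd: files ldap
shadow: files ldap
group:  files ldap
```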


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll

to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pgina client has solved this problem. To connect 2000/XP clients, download the main pgina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some dll issues with

version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the Pgina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pgina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pgina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions
FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and makes it easier, while extending normally commercial features, such as MMR, to open source. By adding pgina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.




Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pgina Site: www.pgina.org

pgina Downloads: sourceforge.net/project/showfiles.php?group_id=53525



VPN (Virtual Private Network) is a technology that provides secure communication through an insecure and untrusted network (like the Internet). Usually, it achieves this by authentication, encryption, compression and tunneling. Tunneling is a technique that encapsulates the packet header and data of one protocol inside the payload field of another protocol. This way, an encapsulated packet can traverse through networks it otherwise would not be capable of traversing.

Currently, the two most common techniques for creating VPNs are IPsec and SSL/TLS. In this article, I describe the features and characteristics of these two techniques and present two short examples of how to create IPsec and SSL/TLS tunnels in Linux and verify that the tunnels started correctly. I also provide a short comparison of these two techniques.

IPsec and Openswan
IPsec (IP security) provides encryption, authentication and compression at the network level. IPsec is actually a suite of protocols, developed by the IETF (Internet Engineering Task Force), which have existed for a long time. The first IPsec protocols were defined in 1995 (RFCs 1825-1829). Later, in 1998, these RFCs were deprecated by RFCs 2401-2412. IPsec implementation in the 2.6 Linux kernel was written by Dave Miller and Alexey Kuznetsov. It handles both IPv4 and IPv6. IPsec operates at layer 3, the network layer, in the OSI seven-layer networking model. IPsec is mandatory in IPv6 and optional in IPv4. To implement IPsec, two new protocols were added: Authentication Header (AH) and Encapsulating Security

Payload (ESP). Handshaking and exchanging session keys are done with the Internet Key Exchange (IKE) protocol.

The AH protocol (RFC 2404) has protocol number 51, and it authenticates both the header and payload. The AH protocol does not use encryption, so it is almost never used.

ESP has protocol number 50. It enables us to add a security policy to the packet and encrypt it, though encryption is not mandatory. Encryption is done by the kernel, using the kernel CryptoAPI. When two machines are connected using the ESP protocol, a unique number identifies this connection; this number is called the SPI (Security Parameter Index). Each packet that flows between these machines has a Sequence Number (SN), starting with 0. This SN is increased by one for each sent packet. Each packet also has a checksum, which is called the ICV (integrity check value) of the packet. This checksum is calculated using a secret key, which is known only to these two machines.

IPsec has two modes: transport mode and tunnel mode. When creating a VPN, we use tunnel mode. This means each IP packet is fully encapsulated in a newly created IPsec packet. The payload of this newly created IPsec packet is the original IP packet.

Figure 2 shows that a new IP header was added at the right, as a result of working with a tunnel, and that an ESP header also was added.

There is a problem when the endpoints (which are sometimes called peers) of the tunnel are behind a NAT (Network

Creating VPNs with IPsec and SSL/TLS
How to create IPsec and SSL/TLS tunnels in Linux. RAMI ROSEN


Figure 1. A Basic VPN Tunnel

Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. S/WAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a setmark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.
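As a hypothetical sketch of that approach (the rule details are illustrative, not from the article), you could mark arriving ESP traffic in the mangle table and then match the mark after decryption; in iptables-restore format:

```
# Illustrative fragment for iptables-restore; assumes an active
# NETKEY setup.
*mangle
# Mark packets arriving as ESP (IP protocol 50) ...
-A PREROUTING -p esp -j MARK --set-mark 1
COMMIT
*filter
# ... and accept the decrypted packets, which keep the same mark.
-A INPUT -m mark --mark 1 -j ACCEPT
COMMIT
```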

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN
OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications and data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first

TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet


Example: Setting Up a VPN Tunnel with IPsec and Openswan
First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
	#
	# Simply use raw RSA keys.
	# After starting openswan, run:
	# ipsec showhostkey --left (or --right)
	# and fill in the connection similarly
	# to the example below.
	#
	left=192.168.0.89
	leftrsasigkey=0sAQPP
	# The remote user.
	#
	right=192.168.0.92
	rightrsasigkey=0sAQON
	type=tunnel
	auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsa.key

Then, copy and paste the contents of rsa.key into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of:

right=ipAddress

The %any keyword is used to specify an unknown IP address.
The type parameter of the connection in this example is

tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic

ICMP should be returned.
The auto parameter of the connection tells which

operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. An output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:
Version check and ipsec on-path                               [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                          [OK]
NETKEY detected, testing for disabled ICMP send_redirects     [OK]
NETKEY detected, testing for disabled ICMP accept_redirects   [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets)   [OK]
Checking that pluto is running                                [OK]
Checking for 'ip' command                                     [OK]
Checking for 'iptables' command                               [OK]
Opportunistic Encryption Support                              [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets; this is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.



Example: Setting Up a VPN Tunnel with OpenVPN
First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side in a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest key; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file, named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the mtu of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get an output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on
Mar  3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official
port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as
the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).
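For instance, OpenVPN itself can create a persistent TUN device that you can then inspect and filter like any other interface; a sketch (requires root, and the device name tun1 is just an example):

```shell
openvpn --mktun --dev tun1   # create a persistent TUN device
ip link show tun1            # it now appears as a normal interface
openvpn --rmtun --dev tun1   # remove it again
```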

IPsec and OpenVPN: a Short Comparison
IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.
IPsec is much more complex than OpenVPN and

involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement
Thanks to Mr. Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era
Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

- Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

- Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

- Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

- Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which

stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security
Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.
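As a hypothetical illustration of definer rights (the schema, table, procedure and account names here are invented for this sketch, not taken from the article), the application account receives EXECUTE on a procedure but no direct privileges on the underlying table:

```shell
# Sketch only; run as a privileged MySQL user against MySQL 5.0+.
mysql -u root -p <<'EOF'
-- The procedure runs with the creator's privileges, so callers
-- can insert rows only through this controlled, logged path.
CREATE PROCEDURE bank.log_deposit(IN acct INT, IN amount DECIMAL(10,2))
    SQL SECURITY DEFINER
    INSERT INTO bank.deposits (account_id, amount, logged_at)
    VALUES (acct, amount, NOW());

-- The application account may call the procedure, but holds no
-- INSERT/UPDATE/DELETE rights on bank.deposits itself.
GRANT EXECUTE ON PROCEDURE bank.log_deposit TO 'app'@'%';
EOF
```

If the 'app' account is ever compromised, the attacker can still only perform the operations the procedure permits.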

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application.

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring their legacy advantages and challenges to MySQL. GUY HARRISON

Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

Data Abstraction
It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
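The single-call version of the transfer might be sketched as follows. The accounts table, its columns and the status convention are assumptions for illustration; the article does not give the actual schema.

```sql
-- Hedged sketch: the whole transfer happens inside one CALL, so the
-- client makes one round trip instead of six. Schema names are invented.
DELIMITER //
CREATE PROCEDURE transfer_funds(IN p_from INT, IN p_to INT,
                                IN p_amount DECIMAL(10,2),
                                OUT p_status VARCHAR(20))
BEGIN
  DECLARE v_balance DECIMAL(10,2);
  START TRANSACTION;
  -- lock the source row while we check the balance
  SELECT balance INTO v_balance
    FROM accounts WHERE account_id = p_from FOR UPDATE;
  IF v_balance >= p_amount THEN
    UPDATE accounts SET balance = balance - p_amount WHERE account_id = p_from;
    UPDATE accounts SET balance = balance + p_amount WHERE account_id = p_to;
    COMMIT;
    SET p_status = 'OK';
  ELSE
    ROLLBACK;
    SET p_status = 'INSUFFICIENT';
  END IF;
END //
DELIMITER ;

-- The client issues a single statement:
-- CALL transfer_funds(1001, 1002, 250.00, @status);
```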

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications

Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in

www.linuxjournal.com | 35

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed

It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.
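The article does not print the benchmark code, but a prime-counting procedure of the kind described might look like this naive trial-division sketch (procedure and parameter names are invented):

```sql
-- Hypothetical sketch of the number-crunching benchmark: count the
-- primes below p_max using plain loops, with no table access at all.
DELIMITER //
CREATE PROCEDURE count_primes(IN p_max INT, OUT p_count INT)
BEGIN
  DECLARE n INT DEFAULT 2;
  DECLARE d INT;
  DECLARE is_prime INT;
  SET p_count = 0;
  WHILE n < p_max DO
    SET d = 2;
    SET is_prime = 1;
    WHILE d * d <= n AND is_prime = 1 DO
      IF n % d = 0 THEN SET is_prime = 0; END IF;
      SET d = d + 1;
    END WHILE;
    IF is_prime = 1 THEN SET p_count = p_count + 1; END IF;
    SET n = n + 1;
  END WHILE;
END //
DELIMITER ;
```

Because this does no database access, it exercises only the interpreter's raw loop and arithmetic speed, which is exactly what the comparison in Figure 3 measures.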

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine and you have a choice


SYSTEM ADMINISTRATION

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

between implementing it in the stored program language or in another language such as PHP or Java, you may wisely decide against using the stored program solution.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation

Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping

It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?

Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes

you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations

MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data manipulation language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes and will take effect for all applications that access the database, including ad hoc SQL.
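A denormalization trigger of the kind described might be sketched like this; the orders/order_lines schema is an assumption for illustration:

```sql
-- Hypothetical trigger keeping a denormalized running total current:
-- every INSERT on order_lines updates the parent order's total,
-- regardless of which application (or ad hoc SQL) performed the insert.
CREATE TRIGGER order_total_ai AFTER INSERT ON order_lines
FOR EACH ROW
  UPDATE orders
     SET order_total = order_total + NEW.line_amount
   WHERE order_id = NEW.order_id;
```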

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.
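A purge task of the sort mentioned could be scheduled like this (the sessions table and retention window are invented for the sketch):

```sql
-- Hypothetical 5.1 scheduled event: purge stale rows once a day.
-- Requires the event scheduler to be enabled (event_scheduler=ON).
CREATE EVENT purge_old_sessions
  ON SCHEDULE EVERY 1 DAY
  DO
    DELETE FROM sessions
     WHERE last_seen < NOW() - INTERVAL 30 DAY;
```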

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
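As a sketch of such a function, consider a loan-payment calculation wrapped so it can be used directly in SELECT statements (the formula's wrapping and the loans table are assumptions, not from the article):

```sql
-- Hypothetical stored function callable from ordinary SQL.
DELIMITER //
CREATE FUNCTION monthly_payment(p_principal DECIMAL(12,2),
                                p_annual_rate DECIMAL(6,4),
                                p_months INT)
  RETURNS DECIMAL(12,2) DETERMINISTIC
BEGIN
  DECLARE r DECIMAL(12,8);
  SET r = p_annual_rate / 12;          -- monthly interest rate
  RETURN p_principal * r / (1 - POW(1 + r, -p_months));
END //
DELIMITER ;

-- Usable anywhere an expression is allowed, e.g.:
-- SELECT name, monthly_payment(loan_amount, 0.0650, 360) FROM loans;
```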

You Decide

The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006, with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.



In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat

Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat

Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.
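One hedged way to produce a suitably long random key and an authkeys file in the right shape (written to a temp directory here; you would copy it to /etc/ha.d/ as root on each node):

```shell
# Sketch: generate a long random sha1 key and an authkeys file with the
# required two-line format and 600 permissions. Paths are illustrative.
tmpdir=$(mktemp -d)
key=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | awk '{print $1}')
printf 'auth 1\n1 sha1 %s\n' "$key" > "$tmpdir/authkeys"
chmod 600 "$tmpdir/authkeys"
```

The same file, with the same key, must then be placed on every node in the cluster.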

The ha.cf File

The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive: the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try

Getting Started with Heartbeat

Your first step toward high-availability bliss. DANIEL BARTHOLOMEW


Listing 1. The /etc/ha.d/authkeys File

auth 1

1 sha1 ThisIsAVeryLongAndBoringPassword


increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the cluster declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary, standby state


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos' heartbeat IP address is 192.168.1.11

# phobos' heartbeat IP address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File

The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPAddr script sets the cluster IP address, which you specify like so:

IPAddr::192.168.1.9/24/eth0

In the above example, the IPAddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.
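If you write your own resource scripts, they need only respond sensibly to start and stop. A minimal sketch, written to a temp file here so it can be exercised directly ("myservice" is a placeholder, not a real daemon):

```shell
# Hedged sketch of an LSB-style resource script; the echo lines stand in
# for launching and stopping a real daemon.
script=$(mktemp)
cat > "$script" <<'EOF'
#!/bin/sh
case "$1" in
  start) echo "myservice started" ;;   # launch the real daemon here
  stop)  echo "myservice stopped" ;;   # shut it down here
  *) echo "Usage: $0 {start|stop}" >&2; exit 1 ;;
esac
EOF
chmod +x "$script"
sh "$script" start
```

Dropped into /etc/ha.d/resource.d (or /etc/init.d) under its real name, a script of this shape can then be listed in haresources like any other resource.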

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).




Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting

Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion

I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use, Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation

To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of disk space. I recommend at least a 1GHz processor or above with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two

ports to allow: LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication

Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING


numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for


Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.
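If you prefer the CLI to the console, an equivalent user can be added from an LDIF file. This is a sketch with placeholder values: the dc=example,dc=com suffix, the OU name and the ID numbers are hypothetical, and the posixAccount attributes shown are the ones a desktop client needs for login (see the client authentication section later in this article).

```
dn: uid=jsmith,ou=Accounting,dc=example,dc=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: posixAccount
cn: John Smith
sn: Smith
givenName: John
uid: jsmith
uidNumber: 5001
gidNumber: 5001
homeDirectory: /home/jsmith
loginShell: /bin/bash
```

You would load it with something like ldapadd -x -D "cn=Directory Manager" -W -f user.ldif against your server.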

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora + FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts, or use DNS. I recommend having DNS and NTP in place already to make life easier.

The next step is creating a Replication Manager account to bind one server's directory to the other, and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab, and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On


SYSTEM ADMINISTRATION

In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others.

Figure 5. User Screen

Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients

With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
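For reference, the changes the utility makes boil down to a few lines in the files mentioned above. The excerpts below are a sketch with placeholder values (the hostname and base DN are hypothetical; your suffix will differ):

```
# /etc/nsswitch.conf (excerpt): consult LDAP after local files
passwd:  files ldap
shadow:  files ldap
group:   files ldap

# /etc/ldap.conf (excerpt): where to find the directory
host ldap.example.com
base dc=example,dc=com
```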


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some dll issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the Pgina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging on locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and makes it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.



Setting up replication in FDS is a relatively painless process.

Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


Address Translation) device. Using NAT is a method of connecting multiple machines that have an "internal address", which are not accessible directly to the outside world. These machines access the outside world through a machine that does have an Internet address; the NAT is performed on this machine, usually a gateway.

When the endpoints of the tunnel are behind a NAT, the NAT modifies the contents of the IP packet. As a result, this packet will be rejected by the peer, because the signature is wrong. Thus, the IETF issued some RFCs that try to find a solution for this problem. This solution commonly is known as NAT-T, or NAT Traversal. NAT-T works by encapsulating IPsec packets in UDP packets, so that these packets will be able to pass through NAT routers without being dropped. RFC 3948, UDP Encapsulation of IPsec ESP Packets, deals with NAT-T (see Resources).

Openswan is an open-source project that provides an implementation of user tools for Linux IPsec. You can create a VPN using Openswan tools (shown in the short example below). The Openswan Project was started in 2003 by former FreeS/WAN developers. FreeS/WAN is the predecessor of Openswan. SWAN stands for Secure Wide Area Network, which is actually a trademark of RSA. Openswan runs on many different platforms, including x86, x86_64, ia64, MIPS and ARM. It supports kernels 2.0, 2.2, 2.4 and 2.6.

Two IPsec kernel stacks are currently available: KLIPS and NETKEY. The Linux kernel NETKEY code is a rewrite from scratch of the KAME IPsec code. The KAME Project was a group effort of six companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD UNIX computer operating system.

KLIPS is not a part of the Linux kernel. When using KLIPS, you must apply a patch to the kernel to support NAT-T. When using NETKEY, NAT-T support is already inside the kernel, and there is no need to patch the kernel.

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces. You apply iptables rules to these interfaces in the same way you apply rules to other network interfaces (such as eth0).

When using NETKEY, applying firewall (iptables) rules is much more complex, as the traffic does not flow through ipsecX interfaces; one solution can be marking the packets in the Linux kernel with iptables (with a setmark iptables rule). This mark is a member of the kernel socket buffer structure (struct sk_buff, from the Linux kernel networking code); decryption of the packet does not modify that mark.
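As an illustration of that marking approach, the rules below sketch one way to tag inbound ESP traffic in the mangle table and then match on the mark later; because decryption preserves the mark, the second rule sees the decrypted packets. The mark value 1 is an arbitrary placeholder, and a real policy would be more involved.

```
# Mark incoming ESP packets before the routing decision is made.
iptables -t mangle -A PREROUTING -p esp -j MARK --set-mark 1

# Later, match decrypted traffic by its mark instead of an ipsecX interface.
iptables -A INPUT -m mark --mark 1 -j ACCEPT
```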

Openswan supports Opportunistic Encryption (OE), which enables the creation of IPsec-based VPNs by advertising and fetching public keys from a DNS server.

OpenVPN

OpenVPN is an open-source project founded by James Yonan. It provides a VPN solution based on SSL/TLS. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications data transfer on the Internet. SSL has been in existence since the early '90s.

The OpenVPN networking model is based on TUN/TAP virtual devices; TUN/TAP is part of the Linux kernel. The first TUN driver in Linux was developed by Maxim Krasnyansky.

OpenVPN installation and configuration is simpler in comparison with IPsec. OpenVPN supports RSA authentication, Diffie-Hellman key agreement, HMAC-SHA1 integrity checks and more. When running in server mode, it supports multiple clients (up to 128) connecting to a VPN server over the same port. You can set up your own Certificate Authority (CA) and generate certificates and keys for an OpenVPN server and multiple clients.

OpenVPN operates in user-space mode; this makes it easy to port OpenVPN to other operating systems.


Figure 2. An IPsec Tunnel ESP Packet

When you apply firewall (iptables) rules, KLIPS is the easier case, because with KLIPS, you can identify IPsec traffic, as this traffic goes through ipsecX interfaces.

Example: Setting Up a VPN Tunnel with IPsec and Openswan

First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
    # Simply use raw RSA keys.
    # After starting Openswan, run:
    #     ipsec showhostkey --left (or --right)
    # and fill in the connection similarly
    # to the example below.
    left=192.168.0.89
    leftrsasigkey=0sAQPP
    # The remote user:
    right=192.168.0.92
    rightrsasigkey=0sAQON
    type=tunnel
    auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsakey

Then, copy and paste the contents of rsakey into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This happens typically when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. An output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:

Version check and ipsec on-path                                 [OK]
Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)
Checking for IPsec support in kernel                            [OK]
NETKEY detected, testing for disabled ICMP send_redirects       [OK]
NETKEY detected, testing for disabled ICMP accept_redirects     [OK]
Checking for RSA private key (/etc/ipsec.d/hostkey.secrets)     [OK]
Checking that pluto is running                                  [OK]
Checking for 'ip' command                                       [OK]
Checking for 'iptables' command                                 [OK]
Opportunistic Encryption Support                                [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)
IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)
IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets flowing in the same direction; it is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.



Example: Setting Up a VPN Tunnel with OpenVPN

First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side in a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.

This type of shared key is the simplest key; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar  3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official
port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as
the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read from or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).

IPsec and OpenVPN: a Short Comparison

IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in the kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement

Thanks to Mr Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of the Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era

Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security

Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON
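To make the injection risk concrete, the short Python sketch below builds SQL text by naive string concatenation against a hypothetical accounts table; no database is involved, only the string the application would send. A stored-procedure-only (or otherwise parameterized) interface never splices user input into SQL syntax like this.

```python
# Build a query the unsafe way: user input is pasted directly into
# SQL syntax. The table and column names here are hypothetical.

def build_query_unsafe(owner: str) -> str:
    return "SELECT * FROM accounts WHERE owner = '" + owner + "'"

# A well-behaved input produces the query you expect:
print(build_query_unsafe("alice"))
# SELECT * FROM accounts WHERE owner = 'alice'

# A malicious input rewrites the WHERE clause to match every row:
print(build_query_unsafe("x' OR '1'='1"))
# SELECT * FROM accounts WHERE owner = 'x' OR '1'='1'
```

With a stored procedure, the client sends only a CALL plus parameter values, so the quote characters remain data rather than becoming syntax (unless, as noted above, the procedure itself builds dynamic SQL from them).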

Data Abstraction

It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic

Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
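The arithmetic behind Figures 1 and 2 is easy to model. The numbers below are illustrative assumptions, not measurements from the article: each statement issued from the application pays one network round trip, while a single CALL to a stored program pays only one in total.

```python
# Toy latency model: total time = round trips * RTT + server-side work.

def response_time_ms(round_trips: int, rtt_ms: float, db_work_ms: float) -> float:
    return round_trips * rtt_ms + db_work_ms

RTT_MS = 2.0       # assumed LAN round-trip time
DB_WORK_MS = 10.0  # assumed total database work, identical in both cases

client_side = response_time_ms(6, RTT_MS, DB_WORK_MS)  # six separate statements
stored_proc = response_time_ms(1, RTT_MS, DB_WORK_MS)  # one CALL does it all
print(client_side, stored_proc)  # 22.0 12.0
```

On a WAN, where the round-trip time dwarfs the database work, the gap widens accordingly.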

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications

Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in


Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed
It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.
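A stored-program version of such a prime-counting benchmark might look like this sketch. The procedure name is invented, and the algorithm is deliberately naive trial division, since the point is only to exercise the language's loop and arithmetic constructs:

```sql
DELIMITER //

-- Count the primes below p_max by trial division.
CREATE PROCEDURE count_primes(IN p_max INT, OUT p_count INT)
BEGIN
    DECLARE candidate INT DEFAULT 2;
    DECLARE divisor   INT;
    DECLARE is_prime  INT;

    SET p_count = 0;
    WHILE candidate < p_max DO
        SET is_prime = 1;
        SET divisor = 2;
        WHILE divisor * divisor <= candidate AND is_prime = 1 DO
            IF candidate MOD divisor = 0 THEN
                SET is_prime = 0;
            END IF;
            SET divisor = divisor + 1;
        END WHILE;
        SET p_count = p_count + is_prime;
        SET candidate = candidate + 1;
    END WHILE;
END //

DELIMITER ;

-- CALL count_primes(10000, @n); SELECT @n;
```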

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine and you have a choice between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

SYSTEM ADMINISTRATION

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation
Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping
It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?
Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements, such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes and will take effect for all applications that access the database, including ad hoc SQL.
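A denormalization trigger of the kind described might be sketched like this; the orders and customers tables and the running-total column are hypothetical:

```sql
-- Keep a running total on the customer row whenever an order arrives,
-- regardless of which application issued the INSERT.
CREATE TRIGGER orders_after_insert
AFTER INSERT ON orders
FOR EACH ROW
    UPDATE customers
       SET lifetime_value = lifetime_value + NEW.order_total
     WHERE customer_id = NEW.customer_id;
```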

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.
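A purge event along these lines illustrates the idea; the table name and retention period are invented, and note that the scheduler itself must first be switched on (SET GLOBAL event_scheduler = ON), as it is disabled by default in 5.1:

```sql
-- Delete audit rows older than 90 days, every night at 03:00.
CREATE EVENT purge_old_audit_rows
    ON SCHEDULE EVERY 1 DAY
    STARTS CURRENT_DATE + INTERVAL 1 DAY + INTERVAL 3 HOUR
    DO
        DELETE FROM audit_log
         WHERE logged_at < NOW() - INTERVAL 90 DAY;
```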

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
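Such a function might look like the following sketch; the discount rules are invented purely for illustration:

```sql
DELIMITER //

-- Encapsulate a business calculation so any SQL statement can use it.
CREATE FUNCTION order_discount(order_total DECIMAL(10,2))
    RETURNS DECIMAL(10,2)
    DETERMINISTIC
BEGIN
    IF order_total >= 1000 THEN
        RETURN order_total * 0.10;
    ELSEIF order_total >= 100 THEN
        RETURN order_total * 0.05;
    END IF;
    RETURN 0;
END //

DELIMITER ;

-- SELECT order_id, order_total, order_discount(order_total) FROM orders;
```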

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006, with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.



In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.

The ha.cf File
The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive: the time between heartbeats, in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try increasing the keepalive.

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

Listing 1. The /etc/ha.d/authkeys File

auth 1
1 sha1 ThisIsAVeryLongAndBoringPassword

Sometimes there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the cluster declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos' heartbeat IP address is 192.168.1.11

# phobos' heartbeat IP address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File
The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions the command is

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located in either /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right, the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).


Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages.

Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.
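If you do write your own resource script, the skeleton below shows the general shape Heartbeat expects. This is only a sketch: "myservice" is a placeholder name, and the echo lines stand in for real commands that would start and stop your daemon.

```shell
#!/bin/sh
# Sketch of an LSB-style resource script for /etc/ha.d/resource.d.
myservice_resource() {
  case "$1" in
    start)
      echo "starting myservice"      # e.g. launch the daemon here
      ;;
    stop)
      echo "stopping myservice"      # e.g. shut the daemon down here
      ;;
    status)
      echo "myservice: status unknown"
      ;;
    *)
      echo "Usage: $0 {start|stop|status}" >&2
      return 1
      ;;
  esac
}

# Heartbeat runs the script with "start" (left to right on takeover)
# and "stop" (right to left on release).
myservice_resource start
```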

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting
Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion
I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha


In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use, Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation
To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of disk space. I recommend at least a 1GHz processor or above, with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser, under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management
Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication
Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two), configured the same way (Fedora plus FDS), with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts, or use DNS. I recommend having DNS and NTP in place already to make life easier.
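A quick pre-flight check along these lines can save debugging time later. This is only a sketch: the peer hostname is a placeholder, and the clock comparison runs only if the ntpdate utility happens to be installed.

```shell
#!/bin/sh
# Sketch: sanity-check replication prerequisites from one node.
PEER=server2.example.com   # placeholder; use your other server's FQDN

can_resolve() {
  # succeeds if the local resolver (DNS or /etc/hosts) knows the name
  getent hosts "$1" > /dev/null
}

if can_resolve "$PEER"; then
  echo "$PEER resolves"
else
  echo "WARNING: cannot resolve $PEER; fix DNS or /etc/hosts first"
fi

# Rough clock comparison; MMR tolerates only small offsets.
if command -v ntpdate > /dev/null; then
  ntpdate -q "$PEER"
fi
```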

The next step is creating a Replication Manager account to bind one server's directory to the other, and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add and then the Save button at the bottom. On server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others.

Figure 5. User Screen

Figure 6. Setting Up Replica

44 | www.linuxjournal.com

SYSTEM ADMINISTRATION

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients
With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM, the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
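Under the hood, the utility's changes boil down to a few lines in those files. A minimal sketch of what the resulting entries might look like (the hostname and base DN below are placeholders for your own environment):

```
# /etc/ldap.conf (illustrative values)
host fds1.example.com
base dc=example,dc=com

# /etc/nsswitch.conf
passwd: files ldap
shadow: files ldap
group:  files ldap
```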


Figure 7. Enter a name and password.

Figure 8. Enter information for the RManager account.

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pgina client has solved this problem. To connect 2000/XP clients, download the main pgina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some .dll issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the Pgina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pgina client is through Safe Mode. Now click on the Configure button, and enter the LDAP server name, port and context you want pgina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions
FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and makes it easier, while extending normally commercial features, such as MMR, to open source. By adding pgina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.




Figure 10. Plugin Tester Tool

Figure 11. Login Prompt

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pgina Site: www.pgina.org

pgina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


Example: Setting Up a VPN Tunnel with IPsec and Openswan
First, download and install the ipsec-tools package and the Openswan package (most distros have these packages).

The VPN tunnel has two participants on its ends, called left and right, and which participant is considered left or right is arbitrary. You have to configure various parameters for these two ends in /etc/ipsec.conf (see man 5 ipsec.conf). The /etc/ipsec.conf file is divided into sections. The conn section contains a connection specification, defining a network connection to be made using IPsec.

An example of a conn section in /etc/ipsec.conf, which defines a tunnel between two nodes on the same LAN, with the left one as 192.168.0.89 and the right one as 192.168.0.92, is as follows:

conn linux-to-linux
    # Simply use raw RSA keys.
    # After starting openswan, run:
    #   ipsec showhostkey --left (or --right)
    # and fill in the connection similarly
    # to the example below.
    left=192.168.0.89
    leftrsasigkey=0sAQPP
    # The remote user:
    right=192.168.0.92
    rightrsasigkey=0sAQON
    type=tunnel
    auto=start

You can generate the leftrsasigkey and rightrsasigkey on both participants by running:

ipsec rsasigkey --verbose 2048 > rsa.key

Then, copy and paste the contents of rsa.key into /etc/ipsec.secrets.

In some cases, IPsec clients are roaming clients (with a random IP address). This typically happens when the client is a laptop used from remote locations (such clients are called Roadwarriors). In this case, use the following in ipsec.conf:

right=%any

instead of

right=ipAddress

The %any keyword is used to specify an unknown IP address.

The type parameter of the connection in this example is tunnel (which is the default). Other types can be transport, signifying host-to-host transport mode; passthrough, signifying that no IPsec processing should be done at all; drop, signifying that packets should be discarded; and reject, signifying that packets should be discarded and a diagnostic ICMP should be returned.

The auto parameter of the connection tells which operation should be done automatically at IPsec startup. For example, auto=start tells it to load and initiate the connection, whereas auto=ignore (which is the default) signifies no automatic startup operation. Other values for the auto parameter can be add, manual or route.

After configuring /etc/ipsec.conf, start the service with:

service ipsec start

You can perform a series of checks to get info about IPsec on your machine by typing ipsec verify. The output of ipsec verify might look like this:

Checking your system to see if IPsec has installed and started correctly:

Version check and ipsec on-path [OK]

Linux Openswan U2.4.7/K2.6.21-rc7 (netkey)

Checking for IPsec support in kernel [OK]

NETKEY detected testing for disabled ICMP send_redirects [OK]

NETKEY detected testing for disabled ICMP accept_redirects [OK]

Checking for RSA private key (/etc/ipsec.d/hostkey.secrets) [OK]

Checking that pluto is running [OK]

Checking for ip command [OK]

Checking for iptables command [OK]

Opportunistic Encryption Support [DISABLED]

You can get information about the tunnel you created by running:

ipsec auto --status

You also can view various low-level IPsec messages in the kernel syslog.

You can test and verify that the packets flowing between the two participants are indeed ESP frames by opening an FTP connection (for example) between the two participants and running:

tcpdump -f esp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

You should see something like this:

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x7)

IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x6)

IP 192.168.0.89 > 192.168.0.92: ESP(spi=0x3a1563b9,seq=0x7)

IP 192.168.0.92 > 192.168.0.89: ESP(spi=0xd514eed9,seq=0x8)

Note that the spi (Security Parameter Index) header is the same for all packets flowing in a given direction; it is an identifier of the connection.

If you need to support NAT traversal, add nat_traversal=yes in ipsec.conf; nat_traversal=no is the default.

The Linux IPsec stack can work with pluto from Openswan, racoon from the KAME Project (which is included in ipsec-tools) or isakmpd from OpenBSD.



Example: Setting Up a VPN Tunnel with OpenVPN
First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side over a secured channel (like SSH, for example). This key is exchanged between client and server when the tunnel is created.
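For instance, assuming the key was generated on the server and the client answers SSH as client.example.com (a placeholder name), the copy might look like:

```shell
scp /etc/openvpn/static.key root@client.example.com:/etc/openvpn/
```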

This type of shared key is the simplest key; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar 3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official
port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as
the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).
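As an illustration (the interface name and port match the earlier example, but treat these rules as a sketch rather than a complete firewall policy):

```shell
# Accept the encrypted tunnel traffic itself...
iptables -A INPUT -p udp --dport 1194 -j ACCEPT
# ...and the decrypted traffic arriving on the virtual device.
iptables -A INPUT -i tun0 -j ACCEPT
```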

IPsec and OpenVPN: a Short Comparison
IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on using the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement
Thanks to Mr. Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and Open Solaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era
Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security
Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.
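As a sketch of this model (the table, database and account names below are invented for illustration), a definer-rights procedure combined with an EXECUTE-only grant might look like this:

```sql
DELIMITER //

-- Created by a privileged account that owns the tables.
-- SQL SECURITY DEFINER (the default) makes it run with the creator's rights.
CREATE PROCEDURE add_order(IN p_customer_id INT, IN p_amount DECIMAL(10,2))
    SQL SECURITY DEFINER
BEGIN
    IF p_amount > 0 THEN  -- simple data integrity rule
        INSERT INTO orders (customer_id, amount)
            VALUES (p_customer_id, p_amount);
        INSERT INTO audit_log (event)  -- every insert is logged
            VALUES (CONCAT('order for customer ', p_customer_id));
    END IF;
END//

DELIMITER ;

-- The application account may call the procedure but cannot touch the tables.
GRANT EXECUTE ON PROCEDURE mydb.add_order TO 'webapp'@'%';
```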

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).
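To illustrate that caveat (names invented), a procedure that treats its argument purely as a value is safe, while one that concatenates it into dynamic SQL reopens the hole:

```sql
DELIMITER //

-- Safe: p_name is bound as a value and is never parsed as SQL text.
CREATE PROCEDURE find_customer(IN p_name VARCHAR(100))
BEGIN
    SELECT customer_id, name FROM customers WHERE name = p_name;
END//

-- Risky: string concatenation lets a crafted p_name alter the statement.
CREATE PROCEDURE find_customer_dynamic(IN p_name VARCHAR(100))
BEGIN
    SET @sql = CONCAT('SELECT customer_id, name FROM customers ',
                      'WHERE name = ''', p_name, '''');
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
END//

DELIMITER ;
```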

Data Abstraction
It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
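A compressed sketch of such a procedure (the accounts table and its columns are hypothetical, and production code would add withdrawal-limit checks, deadlock handling and logging):

```sql
DELIMITER //

CREATE PROCEDURE transfer_funds(IN p_from INT, IN p_to INT,
                                IN p_amount DECIMAL(10,2))
BEGIN
    DECLARE v_balance DECIMAL(10,2);
    START TRANSACTION;
    -- The balance check and both updates happen server-side,
    -- in a single application-to-database round trip.
    SELECT balance INTO v_balance
        FROM accounts WHERE account_id = p_from FOR UPDATE;
    IF v_balance >= p_amount THEN
        UPDATE accounts SET balance = balance - p_amount
            WHERE account_id = p_from;
        UPDATE accounts SET balance = balance + p_amount
            WHERE account_id = p_to;
        COMMIT;
    ELSE
        ROLLBACK;
    END IF;
END//

DELIMITER ;
```

The application then issues a single CALL transfer_funds(...) instead of six or more separate statements.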

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications
Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed
It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation
Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping
It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?
Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements, such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes, and they will take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006, with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.

www.linuxjournal.com | 37


In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare, on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.
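As a concrete sketch of the two-line format, you can generate a long random key rather than inventing one. The file name here is illustrative; in production, the file is /etc/ha.d/authkeys and must be written as root:

```shell
# Hypothetical sketch: build an authkeys file in the two-line format
# described above (an auth directive, then a method/key line), using a
# long random sha1 key derived from /dev/urandom.
KEY=$(head -c 512 /dev/urandom | sha1sum | awk '{print $1}')
printf 'auth 1\n1 sha1 %s\n' "$KEY" > authkeys
chmod 600 authkeys   # Heartbeat requires root-only permissions on this file
cat authkeys
```

Copy the same file to every node in the cluster.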

The ha.cf File
The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around! Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive, the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

SYSTEM ADMINISTRATION

Listing 1. The /etc/ha.d/authkeys File

auth 1

1 sha1 ThisIsAVeryLongAndBoringPassword


increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.
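Because setserial may not be installed everywhere, the sketch below applies the same "UART: unknown" test to a captured sample of setserial-style output; the two sample lines are made up, but in practice you would pipe `setserial -g /dev/ttyS*` into the filter:

```shell
# Filter out pseudo-ports from setserial-style output: any line
# reporting "UART: unknown" is not a real serial port.
filter_real_ports() { grep -v 'UART: unknown'; }

# Hypothetical sample of setserial output, piped through the filter:
printf '%s\n' \
  '/dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 4' \
  '/dev/ttyS1, UART: unknown, Port: 0x02f8, IRQ: 3' \
  | filter_real_ports
```

Only the /dev/ttyS0 line survives the filter, identifying it as the real port.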

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos' heartbeat IP address is 192.168.1.11

# phobos' heartbeat IP address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File
The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.
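The left-to-right start and right-to-left stop ordering can be pictured with a toy shell loop. This is only an illustration of the ordering rule, not Heartbeat code, and the resource names are examples:

```shell
# Toy illustration of haresources ordering: resources on a line are
# started left to right on takeover and stopped right to left on release.
resources="IPaddr Filesystem nfs-kernel-server"
echo "takeover:"
for r in $resources; do echo "  start $r"; done
echo "release:"
for r in $(printf '%s\n' $resources | tac); do echo "  stop $r"; done
```

This ordering is why, for example, an IP address resource is listed before the services that bind to it.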

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right, the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).




Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second, which is not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting
Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion
I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include

active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation
To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more space. I recommend at least a 1GHz processor or above with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two

ports to allow: LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING


numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (./startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management
Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for


Figure 1. Install Options
Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying

permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate

the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication
Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two), configured the same way (Fedora + FDS), with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already to make life easier.

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then click the Save button at the bottom. On




Figure 5. User Screen
Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication, to send only deltas rather than the entire directory database when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on the second server, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients
With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM, the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tab. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.


Figure 7. Enter a name and password.

Figure 8. Enter information for the RManager account.

Figure 9 The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll

to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some dll issues with

version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions
FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and makes it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

46 | www.linuxjournal.com

SYSTEM ADMINISTRATION

Setting up replication in FDS is a relatively painless process.

Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


Example: Setting Up a VPN Tunnel with OpenVPN

First, download and install the OpenVPN package (most distros have this package).

Then, create a shared key by doing the following:

openvpn --genkey --secret static.key

You can create this key on the server side or the client side, but you should copy this key to the other side over a secured channel (like SSH, for example). Both client and server use this shared key when the tunnel is created.
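The key handling described above can be sketched as follows; the host name vpnclient and the /etc/openvpn path are hypothetical stand-ins, and the openvpn and scp invocations are shown commented so the sketch reads as a checklist:

```shell
# Generate the shared key on one side (requires OpenVPN):
#   openvpn --genkey --secret static.key
touch static.key          # stand-in for the generated key in this sketch
chmod 600 static.key      # keep the key readable by its owner only
# Copy it to the other side over a secured channel such as SSH:
#   scp static.key vpnclient:/etc/openvpn/static.key
```

Restricting the key's permissions matters because anyone holding static.key can decrypt or join the tunnel.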

This type of shared key is the simplest; you also can use CA-based keys. The CA can be on a different machine from the OpenVPN server. The OpenVPN HOWTO provides more details on this (see Resources).

Then, create a server configuration file named server.conf:

dev tun
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo

On the client side, create the following configuration file, named client.conf:

remote serverIpAddressOrHostName
dev tun
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo

Note that the order of IP addresses has changed in the client.conf configuration file.

The comp-lzo directive enables compression on the VPN link.

You can set the MTU of the tunnel by adding the tun-mtu directive. When using Ethernet bridging, you should use dev tap instead of dev tun.

The default port for the tunnel is UDP port 1194 (you can verify this by typing netstat -nl | grep 1194 after starting the tunnel).

Before you start the VPN, make sure that the TUN interface (or TAP interface, in case you use Ethernet bridging) is not firewalled.
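A minimal sketch of opening the firewall for the tunnel interface; the rules are written to a small helper script rather than run directly, because applying them requires root. The tun+ wildcard matches tun0, tun1 and so on (use tap+ instead for bridged setups):

```shell
# Write the firewall rules to a helper script for later review/use
cat > allow-tun.sh <<'EOF'
#!/bin/sh
# Accept traffic arriving on and leaving via any tun interface
iptables -A INPUT  -i tun+ -j ACCEPT
iptables -A OUTPUT -o tun+ -j ACCEPT
EOF
chmod +x allow-tun.sh
# Run it as root before starting the tunnel:
#   sudo ./allow-tun.sh
```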

Start the VPN on the server by running openvpn server.conf, and run openvpn client.conf on the client.

You will get output like this on the client:

OpenVPN 2.1_rc2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] built on Mar 3 2007
IMPORTANT: OpenVPN's default port number is now 1194, based on an official
port number assignment by IANA.  OpenVPN 2.0-beta16 and earlier used 5000
as the default port.
LZO compression initialized
TUN/TAP device tun0 opened
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.0.0.2 peer 10.0.0.1
UDPv4 link local (bound): [undef]:1194
UDPv4 link remote: 192.168.0.89:1194
Peer Connection Initiated with 192.168.0.89:1194
Initialization Sequence Completed

You can verify that the tunnel is up by pinging the server from the client (ping 10.0.0.1 from the client).

The TUN interface emulates a PPP (Point-to-Point) network device, and the TAP emulates an Ethernet device. A user-space program can open a TUN device and can read or write to it. You can apply iptables rules to a TUN/TAP virtual device in the same way you would do it to an Ethernet device (such as eth0).

IPsec and OpenVPN: a Short Comparison

IPsec is considered the standard for VPN; many vendors (including Cisco, Nortel, CheckPoint and many more) manufacture devices with built-in IPsec functionalities, which enable them to connect to other IPsec clients.

However, we should be a bit cautious here: different manufacturers may implement IPsec in a noncompatible manner on their devices, which can pose a problem.

OpenVPN is not supported currently by most vendors.

IPsec is much more complex than OpenVPN and involves kernel code; this makes porting IPsec to other operating systems a much heavier task. It is much easier to port OpenVPN to other operating systems than IPsec, because OpenVPN runs entirely in user space and is not involved with kernel code.

Both IPsec and OpenVPN use HMAC (Hash Message Authentication Code) to authenticate packets.

OpenVPN is based on the OpenSSL library; it can run over UDP (which is the default and preferred protocol) or TCP. As opposed to IPsec, which runs in kernel, it runs in user space, so it is heavier than IPsec in terms of performance.

Configuring and applying firewall (iptables) rules in OpenVPN is usually easier than configuring such rules with Openswan in an IPsec-based tunnel.

Acknowledgement

Thanks to Mr. Ken Bantoft for his comments.

Rami Rosen is a computer science graduate of Technion, the Israel Institute of Technology, located in Haifa. He works as a Linux and OpenSolaris kernel programmer for a networking startup, and he can be reached at ramirose@gmail.com. In his spare time, he likes running, solving cryptic puzzles and helping everyone he knows move to this wonderful operating system, Linux.


Resources

OpenVPN: openvpn.net

OpenVPN 2.0 HOWTO: openvpn.net/howto.html

RFC 3948, UDP Encapsulation of IPsec ESP Packets: tools.ietf.org/html/rfc3948

Openswan: www.openswan.org

The KAME Project: www.kame.net


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era

Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.
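As a concrete sketch of that restriction (the schema name appdb, account app_user and procedure transfer_funds here are hypothetical), the privileges might be written out as follows and later loaded with the mysql client:

```shell
# Write the privilege statements to a file for review before loading
cat > grants.sql <<'SQL'
-- The application account may run the procedure...
GRANT EXECUTE ON PROCEDURE appdb.transfer_funds TO 'app_user'@'%';
-- ...but holds no direct table privileges, so ad hoc SQL is off the table
REVOKE ALL PRIVILEGES ON appdb.* FROM 'app_user'@'%';
SQL
# Load with:  mysql -u root -p < grants.sql
```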

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C# and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security

Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).

Data Abstraction

It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic

Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
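A sketch of such a stored program follows; the table and column names are hypothetical, and a real procedure would also check withdrawal limits and log the transfer, as described above:

```shell
# Write the procedure definition to a file; load it with the mysql client
cat > transfer_funds.sql <<'SQL'
DELIMITER //
CREATE PROCEDURE transfer_funds(
    IN p_from_account INT, IN p_to_account INT, IN p_amount DECIMAL(10,2))
BEGIN
  START TRANSACTION;
  UPDATE accounts SET balance = balance - p_amount
   WHERE account_id = p_from_account;
  UPDATE accounts SET balance = balance + p_amount
   WHERE account_id = p_to_account;
  COMMIT;
END //
DELIMITER ;
SQL
# Load with:  mysql -u root -p appdb < transfer_funds.sql
# The ATM application then needs only one round trip:
#   CALL transfer_funds(1, 2, 100.00);
```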

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications

Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

For instance, in a banking application, transfer-of-funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed

It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network: the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation

Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping

It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?

Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements, such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations

MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes and will take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
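A minimal sketch of such a function (the name sales_tax and the flat 6% rate are hypothetical):

```shell
# Write the function definition to a file; load it with the mysql client
cat > sales_tax.sql <<'SQL'
DELIMITER //
-- Encapsulate a calculation so every application computes it the same way
CREATE FUNCTION sales_tax(p_amount DECIMAL(10,2)) RETURNS DECIMAL(10,2)
DETERMINISTIC
RETURN ROUND(p_amount * 0.06, 2) //
DELIMITER ;
-- Once created, it is callable from plain SQL:
--   SELECT item, price, sales_tax(price) FROM orders;
SQL
# Load with:  mysql -u root -p appdb < sales_tax.sql
```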

You Decide

The bottom line is that MySQL stored procedures give you more options for implementing your application, and therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006; with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.


In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat

Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat

Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.
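The two-line file can be generated with a suitably long random key; this sketch writes ./authkeys in the current directory (the real file lives at /etc/ha.d/authkeys and must be owned by root):

```shell
# Derive a long random key and write the two-line authkeys file
KEY=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | awk '{print $1}')
cat > authkeys <<EOF
auth 1
1 sha1 $KEY
EOF
chmod 600 authkeys     # Heartbeat refuses to start if the mode is looser
```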

The ha.cf File

The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around! Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive: the time between heartbeats, in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

Listing 1. The /etc/ha.d/authkeys File

auth 1
1 sha1 ThisIsAVeryLongAndBoringPassword

Sometimes there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.
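Filtering out the bogus entries can be done with grep; the two sample lines below are made-up setserial output, used here so the filter can be shown without real hardware:

```shell
# Sample `setserial -g /dev/ttyS*` output, piped through a filter that
# drops ports whose UART was not detected
cat <<'EOF' | grep -v 'UART: unknown'
/dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 4
/dev/ttyS1, UART: unknown, Port: 0x02f8, IRQ: 3
EOF
```

With real hardware, the equivalent is setserial -g /dev/ttyS* | grep -v 'UART: unknown'.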

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary, standby state when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2
deadtime 32
warntime 16
initdead 64
baud 19200
# On briggs, the serial device is /dev/ttyS1.
# On stratton, the serial device is /dev/ttyS0.
serial /dev/ttyS1
auto_failback on
node briggs
node stratton
use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1
deadtime 10
warntime 5
udpport 694
# deimos' heartbeat ip address is 192.168.1.11
# phobos' heartbeat ip address is 192.168.1.21
ucast eth1 192.168.1.11
auto_failback off
stonith_host deimos wti_nps ares.example.com erisIsTheKey
stonith_host phobos wti_nps ares.example.com erisIsTheKey
node deimos
node phobos
use_logd yes

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File

The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.
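On a Debian-style node, the removal can be looped over every Heartbeat-managed service; the service names here are hypothetical, and the echo prefix makes this a dry run (drop it to actually modify the init links, as root):

```shell
# Dry-run removal of each managed service from the system init
for svc in apache2 nfs-kernel-server; do
    echo /usr/sbin/update-rc.d -f "$svc" remove
done
```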

Now, you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.
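The left-to-right start, right-to-left stop rule can be illustrated with a few lines of plain shell; this is only a demonstration of the ordering, not Heartbeat code, and the resource names are modeled on the NFS cluster's haresources file:

```shell
# One haresources line: node name, then resources in start order
line="deimos IPaddr::192.168.1.21 Filesystem::/dev/etherd/e1.0::/opt/storage::xfs nfs-kernel-server"
set -- $line
shift                               # drop the node name
echo "start order (left to right): $*"
stop=""
for r in "$@"; do stop="$r $stop"; done   # reverse the list
echo "stop order (right to left): $stop"
```

So the IP address comes up before the filesystem and the NFS server, and on failover they are torn down in the opposite order.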

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).
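The node/resource layout, the :: argument convention and the start/stop ordering described above can be sketched in a few lines of Python. This is only an illustration of the file format, not Heartbeat's actual parser; the function name parse_haresources is invented for the example.

```python
# Hypothetical sketch of how a haresources line is interpreted:
# first token is the node, each following token is a resource,
# and '::' separates a resource from its arguments.

def parse_haresources(line):
    """Split a haresources line into the node name and a list of
    (resource, args) pairs."""
    node, *specs = line.split()
    resources = []
    for spec in specs:
        resource, *args = spec.split("::")
        resources.append((resource, args))
    return node, resources

line = ("deimos IPaddr::192.168.1.21 "
        "Filesystem::/dev/etherd/e1.0::/opt/storage::xfs "
        "nfs-kernel-server")
node, resources = parse_haresources(line)

start_order = [r for r, _ in resources]           # started left to right
stop_order = [r for r, _ in reversed(resources)]  # stopped right to left
```

Here start_order is ['IPaddr', 'Filesystem', 'nfs-kernel-server'] and stop_order is the reverse, mirroring the order in which Heartbeat brings resources up and down.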

40 | www.linuxjournal.com

SYSTEM ADMINISTRATION


Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting
Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion
I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation
To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more space. I recommend at least a 1GHz processor or above with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management
Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.
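The naming convention just described (Common Name built from first and last name, login ID from first initial plus last name) can be sketched like so. This is an illustration of the convention only, not code from FDS; the function name is invented.

```python
# Hypothetical sketch of the default naming convention the console
# applies when you create a user; FDS does this internally.

def default_names(first, last):
    cn = f"{first} {last}"           # Common Name from first + last name
    uid = (first[0] + last).lower()  # login ID: first initial + last name
    return cn, uid

cn, uid = default_names("Jeramiah", "Bowling")
# cn  -> "Jeramiah Bowling"
# uid -> "jbowling"
```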

Replication
Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora + FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts, or use DNS. I recommend having DNS and NTP in place already to make life easier.

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Figure 5. User Screen

Figure 6. Setting Up Replica

Now, you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication, to send only deltas rather than the entire directory database when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, and copy the resulting ldif folder to server two and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients
With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM, the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.


Figure 7. Enter a name and password.

Figure 8. Enter information for the RManager account.

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some .dll issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions
FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and makes it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.


Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


Stored procedures (or stored routines, to use the official MySQL terminology) are programs that are both stored and executed within the database server. Stored procedures have been features in closed-source relational databases, such as Oracle, since the early 1990s. However, MySQL added stored procedure support only in the recent 5.0 release, and consequently, applications built on the LAMP stack don't generally incorporate stored procedures. So, this is an opportune time to consider whether stored procedures should be incorporated into your MySQL applications.

Stored Procedures in the Client-Server Era
Database stored programs first came to prominence in the late 1980s and early 1990s, during the client-server revolution. In the client-server applications of that time, stored programs had some obvious advantages:

Client-server applications typically had to balance processing load carefully between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.

Network bandwidth was often a serious constraint on client-server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.

Maintaining correct versions of client software in a client-server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.

Stored programs offered clear security advantages, because in those days, application users typically connected directly to the database, rather than through a middle tier. As I discuss later in this article, stored procedures allow you to restrict the database account only to well-defined procedure calls, rather than allowing the account to execute any and all SQL statements.

With the emergence of three-tier architectures and Web applications, some of the incentives to use stored programs from within applications disappeared. Application clients are now often browser-based, security is predominantly handled by a middle tier, and the middle tier possesses the ability to encapsulate business logic. Most of the functions for which stored programs were used in client-server applications now can be implemented in middle-tier code (PHP, Java, C and so on).

Nevertheless, many of the traditional advantages of stored procedures remain, so let's consider these advantages, and some disadvantages, in more depth.

Using Stored Procedures to Enhance Database Security
Stored procedures are subject to most of the security restrictions that apply to other database objects: tables, indexes, views and so forth. Specific permissions are required before a user can create a stored program, and similarly, specific permissions are needed in order to execute a program.

What sets the stored program security model apart from that of other database objects, and from other programming languages, is that stored programs may execute with the permissions of the user who created the stored procedure, rather than those of the user who is executing the stored procedure. This model allows users to perform actions via a stored procedure that they would not be authorized to perform using normal SQL.

This facility, sometimes called definer rights security, allows us to tighten our database security, because we can ensure that a user gains access to tables only via stored program code that restricts the types of operations that can be performed on those tables and that can implement various business and data integrity rules. For instance, by establishing a stored program as the only mechanism available for certain table inserts or updates, we can ensure that all of these operations are logged, and we can prevent any invalid data entry from making its way into the table.

In the event that this application account is compromised (for instance, if the password is cracked), attackers still will be able to execute only our stored programs, as opposed to being able to run any ad hoc SQL. Although such a situation constitutes a severe security breach, at least we are assured that attackers will be subject to the same checks and logging as normal application users. They also will be denied the opportunity to retrieve information about the underlying database schema (because the ability to run standard SQL will be granted to the procedure, not the user), which will hinder attempts to perform further malicious activities.

MySQL 5 Stored Procedures: Relic or Revolution?
Stored procedures bring the legacy advantages and challenges to MySQL. GUY HARRISON

Another security advantage inherent in stored programs is their resistance to SQL injection attacks. An SQL injection attack can occur when a malicious user manages to "inject" SQL code into the SQL code being constructed by the application. Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).
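The dynamic-SQL caveat in that parenthetical is the crux. The difference between splicing user input into an SQL string and passing it as a bound parameter, which is the same principle that makes well-written stored programs injection-resistant, can be sketched with Python and SQLite (an illustration only, not MySQL stored procedure code; the table and values are invented):

```python
import sqlite3

# Hypothetical demo table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "nobody' OR 'a'='a"

# Vulnerable: user input spliced directly into the SQL string,
# so the attacker's quote characters become part of the query.
unsafe = db.execute(
    "SELECT name FROM users WHERE name = '%s'" % malicious
).fetchall()

# Resistant: input passed as a bound parameter, never parsed as SQL.
safe = db.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()

# unsafe matches every row; safe matches none.
```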

Data Abstraction
It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic
Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
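The shape of the logic that moves to the server can be sketched as a single routine. Below, a Python function stands in for the stored program; each db.execute() corresponds to one of the queries that, without a stored program, would each cost a network round trip (SQLite in memory here; the accounts table and the withdrawal limit are invented for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
db.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 500), (2, 100)])

WITHDRAWAL_LIMIT = 400  # invented policy value

def transfer(src, dst, amount):
    """Fund-transfer logic: in a stored procedure, all of these
    statements run inside the server in a single call."""
    (balance,) = db.execute(
        "SELECT balance FROM accounts WHERE id = ?", (src,)).fetchone()
    if amount > balance or amount > WITHDRAWAL_LIMIT:
        raise ValueError("transfer refused")
    db.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
               (amount, src))
    db.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
               (amount, dst))
    db.commit()

transfer(1, 2, 250)
```

In the client-side version, every statement inside transfer() crosses the network; in the stored program version, only the call itself does.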

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications
Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed

It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice between implementing it in the stored program language or in another language such as PHP or Java, you may wisely decide against using the stored program solution.

SYSTEM ADMINISTRATION

Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network: the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation

Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping

It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?

Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations

MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes, and will take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.

You Decide

The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006, with Steven Feuerstein). Guy can be contacted at guyharrison@quest.com.



In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up, with no downtime, is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat

Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare, on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses, from looking at your environment, as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat

Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.
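Because the key never needs to be typed again, it can be generated randomly. Here is one way to do it, sketched against a local demo file; in production, the target would be /etc/ha.d/authkeys, owned by root:

```shell
# Sketch: build an authkeys file with a long random sha1 key.
# Writes to a demo file; the real path would be /etc/ha.d/authkeys.
KEY=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | awk '{print $1}')
printf 'auth 1\n1 sha1 %s\n' "$KEY" > authkeys.demo
chmod 600 authkeys.demo
```

Copy the resulting file to every node so that all members share the same key.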

The ha.cf File

The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

Listing 1. The /etc/ha.d/authkeys File

auth 1
1 sha1 ThisIsAVeryLongAndBoringPassword

Sometimes there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

The first thing you need to specify is the keepalive: the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the cluster declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (uni-cast) or bcast (broadcast) work well.

The format of the ucast line is

ucast <device> <peer-ip-address>

Here is an example

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2
deadtime 32
warntime 16
initdead 64
baud 19200
# On briggs, the serial device is /dev/ttyS1
# On stratton, the serial device is /dev/ttyS0
serial /dev/ttyS1
auto_failback on
node briggs
node stratton
use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1
deadtime 10
warntime 5
udpport 694
# deimos' heartbeat ip address is 192.168.1.11
# phobos' heartbeat ip address is 192.168.1.21
ucast eth1 192.168.1.11
auto_failback off
stonith_host deimos wti_nps ares.example.com erisIsTheKey
stonith_host phobos wti_nps ares.example.com erisIsTheKey
node deimos
node phobos
use_logd yes

It's a toss-up as to which one to use. My thinking is that, so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary (similar to how the spare tire in my car is not a "real" tire), I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.
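Because a mismatch between the node entries and the hostname is a common tripping point, a quick sanity check can save some head scratching. This small helper function is hypothetical (not part of Heartbeat); it succeeds only if the local hostname appears as a node entry in the given ha.cf:

```shell
# Hypothetical helper: succeed only if `uname -n` appears as a
# node entry in the ha.cf file passed as the first argument.
node_listed() {
    grep -q "^node[[:space:]]\{1,\}$(uname -n)[[:space:]]*$" "$1"
}
```

Usage on each cluster member might look like: node_listed /etc/ha.d/ha.cf || echo "WARNING: $(uname -n) not listed in ha.cf"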

The haresources File

The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right, the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).



Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages.

Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
IPaddr::192.168.1.21 \
Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
killnfsd \
nfs-common \
nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting

Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion

I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation

To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of disk space. I recommend at least a 1GHz processor with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.
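If you do run a firewall, the two ports can be opened with rules along these lines. This is a sketch using the custom ports chosen in this article, not a rule set from the original text; adjust the ports and chain to match your own configuration:

```shell
# Sketch: allow the LDAP and Admin Console ports used in this article.
# 3891/3892 are the custom ports chosen above; the defaults differ.
iptables -A INPUT -p tcp --dport 3891 -j ACCEPT   # LDAP
iptables -A INPUT -p tcp --dport 3892 -j ACCEPT   # Admin Console
```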

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then start the service by typing

service httpd start

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (./startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy; however, I could not get this feature to work properly.

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora + FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts, or use DNS. I recommend having DNS and NTP in place already to make life easier.

The next step is creating a Replication Manager account to bind one server's directory to the other, and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then click the Save button at the bottom.

44 | www.linuxjournal.com

SYSTEM ADMINISTRATION

In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others.

Figure 5. User Screen

Figure 6. Setting Up Replica

On server two, perform the same steps as just completed: create the RManager account in the config folder, enable changelogs, and create a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication, which sends only deltas rather than the entire directory database when changes occur and is very useful over WAN connections. On the Schedule screen, select Always keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.
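Beyond checking the logs, you can confirm replication from the command line by searching both servers for the same entry. The commands below are a sketch: the hostnames and the dc=example,dc=com suffix are assumptions, not values from this setup; only the 3891 port matches our configuration. An entry added on one server should appear in the results from both.

```shell
# Search server one, then server two, for the same test account;
# matching results on both confirm the agreement is replicating.
ldapsearch -x -h server-one.example.com -p 3891 \
    -b "dc=example,dc=com" "(uid=someuser)" dn
ldapsearch -x -h server-two.example.com -p 3891 \
    -b "dc=example,dc=com" "(uid=someuser)" dn
```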

Authenticating Desktop Clients

With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM, the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility, available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
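For reference, the lines the utility writes look roughly like the sketch below. The server name and suffix here are invented placeholders, not values from our setup; check the files on your own client after enabling LDAP.

```
# /etc/nsswitch.conf -- fall back to LDAP after local files
passwd: files ldap
shadow: files ldap
group:  files ldap

# /etc/ldap.conf -- hypothetical directory server and suffix
host ldap.example.com
base dc=example,dc=com
```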


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some .dll issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.


Setting up replication in FDS is a relatively painless process.

Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


Stored programs do not offer the only protection against SQL injection attacks, but applications that rely exclusively on stored programs to interact with the database are largely resistant to this type of attack (provided that those stored programs do not themselves build dynamic SQL strings without fully validating their inputs).
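To illustrate the point, a stored procedure that takes its input as a declared parameter never concatenates user text into a SQL statement, so a hostile value is treated as plain data. The table, column and procedure names in this sketch are invented for illustration; they do not come from the article.

```sql
-- Hypothetical lookup routine: p_email is bound as a value, never
-- interpolated, so input like "x' OR '1'='1" cannot alter the query.
DELIMITER //
CREATE PROCEDURE get_customer_by_email(IN p_email VARCHAR(255))
BEGIN
    SELECT customer_id, name
      FROM customers
     WHERE email = p_email;
END //
DELIMITER ;
```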

Data Abstraction

It is generally a good practice to separate your data access code from your business logic and presentation logic. Data access routines often are used by multiple program modules and are likely to be maintained by a separate group of developers. A very common scenario requires changes to the underlying data structures while minimizing the impact on higher-level logic. Data abstraction makes this much easier to accomplish.

The use of stored programs provides a convenient way of implementing a data access layer. By creating a set of stored programs that implement all of the data access routines required by the application, we are effectively building an API for the application to use for all database interactions.

Reducing Network Traffic

Stored programs can improve application performance radically by reducing network traffic in certain situations.

It's commonplace for an application to accept input from an end user, read some data in the database, decide what statement to execute next, retrieve a result, make a decision, execute some SQL and so on. If the application code is written entirely outside the database, each of these steps would require a network round trip between the database and the application. The time taken to perform these network trips easily can dominate overall user response time.

Consider a typical interaction between a bank customer and an ATM machine. The user requests a transfer of funds between two accounts. The application must retrieve the balance of each account from the database, check withdrawal limits and possibly other policy information, issue the relevant UPDATE statements, and finally issue a commit, all before advising the customer that the transaction has succeeded. Even for this relatively simple interaction, at least six separate database queries must be issued, each with its own network round trip between the application server and the database. Figure 1 shows the sequences of interactions that would be required without a stored program.

On the other hand, if a stored program is used to implement the fund transfer logic, only a single database interaction is required. The stored program takes responsibility for checking balances, withdrawal limits and so on. Figure 2 shows the reduction in network round trips that occurs as a result.
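A sketch of that fund-transfer procedure might look like the following. The table, column and procedure names are assumptions made for this example, and the withdrawal-limit and policy checks described above are reduced to a single balance check for brevity.

```sql
-- One CALL replaces the six round trips described in the text.
DELIMITER //
CREATE PROCEDURE transfer_funds(
    IN  p_from   INT,
    IN  p_to     INT,
    IN  p_amount DECIMAL(10,2),
    OUT p_ok     TINYINT)
BEGIN
    DECLARE v_balance DECIMAL(10,2);

    START TRANSACTION;
    -- Lock the source row while we check its balance.
    SELECT balance INTO v_balance
      FROM accounts WHERE account_id = p_from FOR UPDATE;

    IF v_balance >= p_amount THEN
        UPDATE accounts SET balance = balance - p_amount
         WHERE account_id = p_from;
        UPDATE accounts SET balance = balance + p_amount
         WHERE account_id = p_to;
        COMMIT;
        SET p_ok = 1;
    ELSE
        ROLLBACK;
        SET p_ok = 0;
    END IF;
END //
DELIMITER ;
```

The application then issues a single CALL transfer_funds(...) and inspects the output parameter, rather than conducting the whole conversation over the network itself.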

Network round trips also can become significant when an application is required to perform some kind of aggregate processing on very large record sets in the database. For instance, if the application needs to retrieve millions of rows in order to calculate some sort of business metric that cannot be computed easily using native SQL, such as average time to complete an order, a very large number of round trips can result. In such a case, the network delay again may become the dominant factor in application response time. Performing the calculations in a stored program will reduce network overhead, which might reduce overall response time, but you need to be sure to take into account the differences in raw computation speed, which I discuss later in this article.

Creating Common Routines across Multiple Applications

Although it is commonplace for a MySQL database to be at the service of a single application, it is not at all uncommon for multiple applications to share a single database. These applications might run on different machines and be written in


Figure 1. Network Round Trips without Stored Procedure

Figure 2. Network Round Trips with Stored Procedure

different languages; it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

For instance, in a banking application, transfer of funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed

It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).
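For a concrete sense of what such a trial exercises, here is a trial-division prime counter in the stored program language. This is a reconstruction for illustration, not the author's actual benchmark code, and the procedure name is invented.

```sql
-- Pure computation: loops and arithmetic with no table access,
-- which is exactly where the stored program language is weakest.
DELIMITER //
CREATE PROCEDURE count_primes(IN p_limit INT, OUT p_count INT)
BEGIN
    DECLARE n INT DEFAULT 2;
    DECLARE d INT;
    DECLARE is_prime INT;
    SET p_count = 0;
    WHILE n <= p_limit DO
        SET d = 2;
        SET is_prime = 1;
        WHILE d * d <= n AND is_prime = 1 DO
            IF n MOD d = 0 THEN SET is_prime = 0; END IF;
            SET d = d + 1;
        END WHILE;
        SET p_count = p_count + is_prime;
        SET n = n + 1;
    END WHILE;
END //
DELIMITER ;
```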

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice


Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation

Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping

It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?

Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements, such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations

MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes, and will take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.
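A stored function of that kind might look like the sketch below. The function name and the loan-payment formula are invented for this example; the point is that once created, the function is callable from any ordinary SELECT.

```sql
-- Hypothetical calculation encapsulated as a stored function.
DELIMITER //
CREATE FUNCTION monthly_payment(
    principal DECIMAL(10,2), annual_rate DECIMAL(6,4), months INT)
RETURNS DECIMAL(10,2)
DETERMINISTIC
BEGIN
    DECLARE r DECIMAL(12,8);
    SET r = annual_rate / 12;
    -- Standard amortization formula for a fixed-rate loan.
    RETURN ROUND(principal * r / (1 - POW(1 + r, -months)), 2);
END //
DELIMITER ;

-- Usable directly inside standard SQL, for example:
-- SELECT loan_id, monthly_payment(amount, 0.0650, 360) FROM loans;
```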

You Decide

The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006, with Steven Feuerstein). Guy can be contacted at guyharrison@quest.com.


In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat

Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare, on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat

Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.

The ha.cf File

The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive, the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try

Getting Started with Heartbeat

Your first step toward high-availability bliss. DANIEL BARTHOLOMEW


Listing 1. The /etc/ha.d/authkeys File

auth 1

1 sha1 ThisIsAVeryLongAndBoringPassword

Sometimes there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the cluster declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (uni-cast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2
deadtime 32
warntime 16
initdead 64
baud 19200
# On briggs, the serial device is /dev/ttyS1
# On stratton, the serial device is /dev/ttyS0
serial /dev/ttyS1
auto_failback on
node briggs
node stratton
use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1
deadtime 10
warntime 5
udpport 694
# deimos' heartbeat IP address is 192.168.1.11
# phobos' heartbeat IP address is 192.168.1.21
ucast eth1 192.168.1.11
auto_failback off
stonith_host deimos wti_nps ares.example.com erisIsTheKey
stonith_host phobos wti_nps ares.example.com erisIsTheKey
node deimos
node phobos
use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that, so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File

The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right, the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).


Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages.

Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.
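If you do write your own resource script, the LSB-style shape Heartbeat expects can be sketched as below. The service name "myservice" and the echo placeholders are invented for illustration; a real script would launch and stop an actual daemon and return proper exit codes.

```shell
#!/bin/sh
# Minimal sketch of an LSB-style resource script for a hypothetical
# service "myservice"; Heartbeat invokes it with "start" and "stop".

start() {
    # A real script would launch the daemon here.
    echo "myservice: started"
}

stop() {
    # A real script would shut the daemon down here.
    echo "myservice: stopped"
}

# Dispatch on the action passed as the first argument, if any.
if [ $# -gt 0 ]; then
    case "$1" in
        start) start ;;
        stop)  stop ;;
        *)     echo "Usage: $0 {start|stop}" >&2; exit 1 ;;
    esac
fi
```

Drop a script like this into /etc/ha.d/resource.d (or /etc/init.d), make it executable, and list it by name in haresources.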

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting
Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion
I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include:

active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Macs and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.

www.linuxjournal.com | 41

Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha


In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAP v3 and Active Directory integration. By far the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation
To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of space. I recommend at least a 1GHz processor with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux. HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two

ports to allow: LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives
rm java
ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser, under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

SYSTEM ADMINISTRATION

numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script, or directly edit /etc/rc.d/rc.local, to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd
/opt/fedora-ds/start-admin &

Directory Structure and Management
Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.
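Scheduling from the CLI usually means a cron entry pointing at the instance's backup script. The db2bak path below is an assumption based on the /opt/fedora-ds layout described above; verify the script name and location on your install before using it:

```shell
# hypothetical crontab entry: back up the directory nightly at 2 a.m.
0 2 * * * /opt/fedora-ds/slapd-yourservername/db2bak
```

Add it with crontab -e as root, and check the backup output directory the next morning.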

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for


Figure 1. Install Options
Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying

permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate

the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication
Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two), configured the same way (Fedora + FDS), with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already to make life easier.
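Those two pre-flight checks can be sketched as a couple of commands; "localhost" below stands in for the real peer hostname, which is an assumption for illustration:

```shell
# name resolution: the peer's hostname must resolve (here "localhost" is a stand-in)
getent hosts localhost && echo "peer resolves"

# clock offset: run this on both servers and compare; the values should be close
date -u +%s
```

If getent returns nothing for the peer, fix DNS or /etc/hosts before going any further.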

The next step is creating a Replication Manager account to bind one server's directory to the other, and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then click the Save button at the bottom. On




Figure 5. User Screen
Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication, to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option and copy the resulting ldif folder to server two, then import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients
With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility, available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tab. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
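For reference, the lookup entries the utility writes into /etc/nsswitch.conf look roughly like this (representative of the standard files-then-LDAP ordering, not a verbatim copy of Fedora's output):

```shell
# /etc/nsswitch.conf entries after enabling LDAP (illustrative)
passwd: files ldap
shadow: files ldap
group:  files ldap
```

The "files ldap" ordering means local accounts still work even when the directory server is unreachable.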


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll

to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some .dll issues with

version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions
FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.




Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


different languages, it may be hard or impossible for these applications to share code. Implementing common code in stored programs may allow these applications to share critical common routines.

For instance, in a banking application, transfer-of-funds transactions might originate from multiple sources, including a bank teller's console, an Internet browser, an ATM or a phone banking application. Each of these applications could conceivably have its own database access code written in largely incompatible languages, and without stored programs, we might have to replicate the transaction logic, including logging, deadlock handling and optimistic locking strategies, in multiple places and in multiple languages. In this scenario, consolidating the logic in a database stored procedure can make a lot of sense.

Not Built for Speed
It would be terribly unfair of us to expect the first release of the MySQL stored program language to be blisteringly fast. After all, languages such as Perl and PHP have been the subject of tweaking and optimization for about a decade, while

the latest generation of programming languages, .NET and Java, has been the subject of a shorter but more intensive optimization process by some of the biggest software companies in the world. So, right from the start, we might expect that the MySQL stored program language would lag in comparison with the other languages commonly used in the MySQL world.

Still, it's important to get a sense of the raw performance of the language. First, let's see how quickly the stored program language can crunch numbers. The first example compares a stored procedure calculating prime numbers against an identical algorithm implemented in alternative languages.

In this computationally intensive trial, MySQL performed poorly compared with other languages: five times slower than PHP or Perl, and dozens of times slower than Java, .NET or C (Figure 3).

Most of the time, stored programs are dominated by database access time, where stored programs have a natural performance advantage over other programming languages because of their lower network overhead. However, if you are writing a number-crunching routine, and you have a choice



Figure 3. Stored procedures are a poor choice for number crunching.

Figure 4. Stored procedures outperform when the network is a factor.

between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation
Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping
It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord) and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?
Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes

you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements, such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer, while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data modification language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes, and they will take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006, with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.



In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does

not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses, from looking at your environment, as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys
/etc/ha.d/ha.cf
/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.

The ha.cf File
The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive: the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW


Listing 1. The /etc/ha.d/authkeys File

auth 1
1 sha1 ThisIsAVeryLongAndBoringPassword


increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.
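To illustrate the filtering, the sketch below runs grep over some made-up setserial-style output; on a real host you would pipe the actual command through the same filter (setserial -g /dev/ttyS* | grep -v 'UART: unknown'):

```shell
# Sketch: weed out bogus serial ports. The sample output below is
# fabricated for the example; a real system gets it from setserial.
sample='/dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 4
/dev/ttyS1, UART: unknown, Port: 0x02f8, IRQ: 3
/dev/ttyS2, UART: unknown, Port: 0x03e8, IRQ: 4'

# Anything still listed after the filter is a candidate heartbeat device.
printf '%s\n' "$sample" | grep -v 'UART: unknown'
```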

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (uni-cast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state

www.linuxjournal.com | 39

Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos' heartbeat IP address is 192.168.1.11

# phobos' heartbeat IP address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File

The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.
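As a sketch of what that housecleaning can look like across distributions, the function below only prints the command for each init style rather than running it; the Red Hat-style chkconfig --del branch is my own assumption, not something from the article:

```shell
# Sketch (dry run): print the command that removes a service from
# system init. Only the Debian line comes from the article; the
# "redhat" branch is an assumption to verify on your own nodes.
print_remove_cmd() {
    style="$1" service="$2"
    case "$style" in
        debian) echo "/usr/sbin/update-rc.d -f $service remove" ;;
        redhat) echo "chkconfig --del $service" ;;
        *) echo "unknown init style: $style" >&2; return 1 ;;
    esac
}

print_remove_cmd debian apache2
print_remove_cmd redhat httpd
```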

Now, you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.
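The left-to-right start and right-to-left stop rule can be sketched in a few lines of shell; echo stands in for invoking the real resource scripts, and the resource names are only placeholders:

```shell
# Sketch: haresources ordering. Resources start left to right and
# stop right to left; echo stands in for the real resource scripts.
resources="IPaddr Filesystem nfs-kernel-server"

start_all() {
    for r in $resources; do echo "start $r"; done
}

stop_all() {
    # Build the list in reverse, then stop each resource.
    reversed=""
    for r in $resources; do reversed="$r $reversed"; done
    for r in $reversed; do echo "stop $r"; done
}

start_all
stop_all
```

The ordering matters because, for example, a filesystem must be mounted before the service that exports it starts, and stopped only after that service is down.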

Resources are scripts located in either /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).
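The :: separator convention can be seen with a small parsing sketch; this mimics, but is not, Heartbeat's own argument handling:

```shell
# Sketch: split a haresources entry on "::" into script and arguments.
parse_resource() {
    # Turn "Script::a::b" into "Script a b" and re-split the words.
    set -- $(printf '%s\n' "$1" | sed 's/::/ /g')
    echo "script: $1"
    shift
    echo "args: $*"
}

parse_resource 'Filesystem::/dev/etherd/e1.0::/opt/data::xfs'
```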




Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \

IPaddr::192.168.1.21 \

Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \

killnfsd \

nfs-common \

nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second; not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting

Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly, or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion

I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation

To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of disk space. I recommend at least a 1GHz processor with 512MB or more of memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two

ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication

Check out Fedora Directory Server to authenticate your clients without licensing fees. Jeramiah Bowling


numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for


Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora + FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts, or use DNS. I recommend having DNS and NTP in place already to make life easier.
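A quick pre-flight sketch for the name-resolution half of that check follows; the peer hostname in the comment is a placeholder, and verifying the NTP offset is left to your own time-sync tooling:

```shell
# Sketch: confirm a peer's hostname resolves before configuring
# replication. getent consults the same sources (files, DNS) the
# directory servers themselves will use.
can_resolve() {
    getent hosts "$1" > /dev/null
}

# "localhost" keeps the example self-contained; on the real servers,
# check each peer by name, e.g. can_resolve server-two.example.com
if can_resolve localhost; then
    echo "localhost resolves"
else
    echo "fix DNS or /etc/hosts first"
fi
```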

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add and then the Save button at the bottom. On




Figure 5. User Screen

Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now, you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients

With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility, available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information and Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
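The GUI utility has a command-line counterpart, authconfig, that makes the same edits. The sketch below only prints a plausible invocation; the server URL and base DN are placeholders, and the exact flag spellings should be verified against authconfig's documentation on your release:

```shell
# Sketch (dry run): echo an authconfig command roughly equivalent to
# enabling LDAP in the GUI. All values here are placeholders.
ldap_server="ldap://fds.example.com:3891"
base_dn="dc=example,dc=com"

cmd="authconfig --enableldap --enableldapauth --ldapserver=$ldap_server --ldapbasedn=$base_dn --update"

# Printed, not executed, because running it rewrites nsswitch.conf
# and the PAM stack on the spot.
echo "$cmd"
```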


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some .dll issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.




Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


between implementing it in the stored program language or in another language, such as PHP or Java, you may wisely decide against using the stored program solution.

If the previous example left you feeling less than enthusiastic about stored program performance, this next example should cheer you right up. Although stored programs aren't particularly zippy when it comes to number crunching, it is definitely true that you don't normally write stored programs simply to perform math; stored programs almost always process data from the database. In these circumstances, the difference between stored program and PHP or Java performance is usually minimal, unless network overhead is a big factor. When a program is required to process large numbers of rows from the database, a stored program can substantially outperform programs written in client languages, because it does not have to wait for rows to be transferred across the network; the stored program runs inside the database. Figure 4 shows how a stored procedure that aggregates millions of rows can perform well even when called from a remote host across the network, while a Java program with identical logic suffers from severe network-driven response time degradation.

Logic Fragmentation

Although it is generally useful to encapsulate data access logic inside stored programs, it is usually inadvisable to "fragment" business and application logic by implementing some of it in stored programs and the rest of it in the middle tier or the application client.

Debugging application errors that involve interactions between stored program code and other application code may be many times more difficult than debugging code that is completely encapsulated in the application layer. For instance, there is currently no debugger that can trace program flow from the application code into the MySQL stored program code.

Also, if your application relies on stored procedures, that's an additional skill that you or your team will have to acquire and maintain.

Object-Relational Mapping

It's becoming increasingly common for an Object-Relational Mapping (ORM) framework to mediate interactions between the application and the database. ORM is very common in Java (Hibernate and EJB), almost unavoidable in Ruby on Rails (ActiveRecord), and far less common in PHP (though there are an increasing number of PHP ORM packages available). ORM systems generate SQL to maintain a mapping between program objects and database tables. Although most ORM systems allow you to overwrite the ORM SQL with your own code, such as a stored procedure call, doing so negates some of the advantages of the ORM system. In short, stored procedures become harder to use and a lot less attractive when used in combination with ORM.

Are Stored Procedures Portable?

Although all relational databases implement a common set of SQL syntax, each RDBMS offers proprietary extensions to this standard SQL, and MySQL is no exception. If you are attempting to write an application that is designed to be independent of the underlying database, you probably will want to avoid these extensions in your application. However, sometimes

you'll need to use specific syntax to get the most out of the server. For instance, in MySQL, you often will want to employ MySQL hints, execute non-ANSI statements such as LOCK TABLES, or use the REPLACE statement.

Using stored programs can help you avoid RDBMS-dependent code in your application layer while allowing you to continue to take advantage of RDBMS-specific optimizations. In theory, stored program calls against different databases can be made to look and behave identically from the application's perspective. You can encapsulate all the database-dependent code inside the stored procedures. Of course, the underlying stored program code will need to be rewritten for each RDBMS, but at least your application code will be relatively portable.

However, there are differences between the various database servers in how they handle stored procedure calls, especially if those calls return result sets. MySQL, SQL Server and DB2 stored procedures behave very similarly from the application's point of view. However, Oracle and Postgres calls can look and act differently, especially if your stored procedure call returns one or more result sets.

So, although using stored procedures can improve the portability of your application while still allowing you to exploit vendor-specific syntax, they don't make your application totally portable.

Other Considerations
MySQL stored programs can be used for a variety of tasks in addition to traditional application logic:

Triggers are stored programs that fire when data manipulation language (DML) statements execute. Triggers can automate denormalization and enforce business rules without requiring application code changes and will take effect for all applications that access the database, including ad hoc SQL.

The MySQL event scheduler, introduced in the 5.1 release, allows stored procedure code to be executed at regular intervals. This is handy for running regular application maintenance tasks, such as purging and archiving.

The MySQL stored program language can be used to create functions that can be called from standard SQL. This allows you to encapsulate complex application calculations in a function and then use that function within SQL calls. This can centralize logic, improve maintainability and, if used carefully, improve performance.

You Decide
The bottom line is that MySQL stored procedures give you more options for implementing your application and, therefore, are undeniably a "good thing". Judicious use of stored procedures can result in a more secure, higher-performing and maintainable application. However, the degree to which an application might benefit from stored procedures is greatly dependent on the nature of that application. I hope this article helps you make a decision that works for your situation.

Guy Harrison is chief architect for Database Solutions at Quest Software (www.quest.com). This article uses some material from his book MySQL Stored Procedure Programming (O'Reilly, 2006; with Steven Feuerstein). Guy can be contacted at guy.harrison@quest.com.

www.linuxjournal.com | 37

In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager tries to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare, on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.
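Because the key never needs to be typed again, it can simply be generated. The following sketch (illustrative, not from the article; it writes a local authkeys.example file rather than the real /etc/ha.d/authkeys) creates a long random sha1 key and sets the required permissions:

```shell
# Illustrative sketch only: generate a long random key and write a
# two-line authkeys file in the Listing 1 format. On a real node, the
# target would be /etc/ha.d/authkeys, owned by root.
KEY=$(head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n')
AUTHKEYS="authkeys.example"
printf 'auth 1\n1 sha1 %s\n' "$KEY" > "$AUTHKEYS"
chmod 600 "$AUTHKEYS"
```

The result matches the Listing 1 layout, with a 64-hex-character random key in place of a memorable password.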

The ha.cf File
The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive, the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW

SYSTEM ADMINISTRATION

Listing 1. The /etc/ha.d/authkeys File

auth 1

1 sha1 ThisIsAVeryLongAndBoringPassword

increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.
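The four timing directives have to make sense relative to one another. Using the values from Listing 2, a quick sanity check might look like the following (a rule-of-thumb sketch, not an official Heartbeat validation):

```shell
# Rule-of-thumb checks (not official Heartbeat validation): warn before
# declaring death, declare death well after missed heartbeats, and allow
# extra slack at initial startup.
KEEPALIVE=2 WARNTIME=16 DEADTIME=32 INITDEAD=64   # values from Listing 2
CHECKS=0
[ "$DEADTIME" -ge $((KEEPALIVE * 2)) ] && CHECKS=$((CHECKS + 1)) \
  && echo "deadtime is at least twice keepalive: ok"
[ "$WARNTIME" -gt "$KEEPALIVE" ] && [ "$WARNTIME" -lt "$DEADTIME" ] \
  && CHECKS=$((CHECKS + 1)) && echo "keepalive < warntime < deadtime: ok"
[ "$INITDEAD" -ge "$DEADTIME" ] && CHECKS=$((CHECKS + 1)) \
  && echo "initdead is at least deadtime: ok"
```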

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.
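With many ports, the filtering is easy to script. The sketch below runs grep over canned setserial-style output (the sample lines are made up for illustration, since the real command needs serial hardware):

```shell
# Canned sample of 'setserial -g /dev/ttyS*' output; the ttyS1 line has
# "UART: unknown", so it is not a real serial port and gets filtered out.
SAMPLE='/dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 4
/dev/ttyS1, UART: unknown, Port: 0x02f8, IRQ: 3'
REAL=$(printf '%s\n' "$SAMPLE" | grep -v 'UART: unknown')
echo "$REAL"
```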

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state

Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos' heartbeat IP address is 192.168.1.11

# phobos' heartbeat IP address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File
The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.
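On Red Hat-style nodes, the equivalent tool is chkconfig. The helper below only prints the commands for each service rather than executing them, so the two styles can be compared side by side (service names are examples):

```shell
# Print (do not run) the init-removal command for each Heartbeat-managed
# service, in both Debian and Red Hat styles. Service names are examples.
plan_removal() {
  for svc in "$@"; do
    echo "Debian:  /usr/sbin/update-rc.d -f $svc remove"
    echo "Red Hat: /sbin/chkconfig --del $svc"
  done
}
plan_removal apache2 nfs-kernel-server
```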

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.
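The left-to-right start and right-to-left stop rule can be sketched in a few lines of shell, using the one-line cluster from Listing 4 as input:

```shell
# Demonstrates Heartbeat's ordering rule for a haresources line:
# resources start left to right and stop right to left.
LINE="stratton 192.168.1.41 apache2"
set -- $LINE
NODE=$1; shift                  # first field is the node name
START_ORDER="$*"
STOP_ORDER=""
for r in "$@"; do STOP_ORDER="$r $STOP_ORDER"; done
STOP_ORDER=${STOP_ORDER% }      # trim trailing space
echo "node: $NODE"
echo "start order: $START_ORDER"
echo "stop order:  $STOP_ORDER"
```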

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).
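Because every argument-taking resource follows the same script::arg::arg pattern, a Filesystem entry decomposes mechanically. A quick illustration:

```shell
# Split a haresources resource entry on '::' into script name and arguments
RES="Filesystem::/dev/etherd/e1.0::/opt/data::xfs"
PARSED=$(echo "$RES" | awk -F'::' \
  '{ printf "script=%s device=%s mount=%s fstype=%s", $1, $2, $3, $4 }')
echo "$PARSED"
```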

Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \

IPaddr::192.168.1.21 \

Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \

killnfsd \

nfs-common \

nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second, which is not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting
Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion
I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include

active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.

Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat, Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha

In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation
To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of disk space. I recommend at least a 1GHz processor or above, with 512MB or more of memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two

ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script, or directly edit /etc/rc.d/rc.local, to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management
Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for

Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying

permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate

the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication
Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two), configured the same way (Fedora + FDS), with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts, or use DNS. I recommend having DNS and NTP in place already to make life easier.
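A quick pre-flight check along those lines might look like the following sketch (the peer hostname is a stand-in; on real servers, you would test the other node's FQDN and compare the clock output on both machines, or inspect the NTP daemon directly):

```shell
# Pre-replication sanity checks: can we resolve the peer, and what is our
# clock set to? PEER is a stand-in; use the other FDS server's FQDN.
PEER="localhost"
if getent hosts "$PEER" >/dev/null; then RESOLVE=OK; else RESOLVE=FAILED; fi
echo "name resolution for $PEER: $RESOLVE"
NOW=$(date -u +%Y-%m-%dT%H:%M:%SZ)
echo "local UTC time: $NOW (compare on both nodes)"
```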

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On

Figure 5. User Screen

Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication, to send only deltas rather than the entire directory database when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on the second server, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and, if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients
With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM, the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM, instead of your local passwd and shadow files, to authenticate users.
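For reference, the entries the utility writes typically look something like the following sketch (the server name, port and base DN are illustrative values matching this article's setup; exact contents vary by release):

```text
# /etc/ldap.conf (illustrative values)
host fds1.example.com
base dc=example,dc=com
port 3891

# /etc/nsswitch.conf (lookup order: local files first, then LDAP)
passwd: files ldap
shadow: files ldap
group:  files ldap
```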

Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll

to authenticate to Microsoft networks only In the pastvendors such as Novell have used their own proprietaryclients to work around this but now the open-sourcepgina client has solved this problem To connect 2000XPclients download the main pgina zipfile from the projectpage on SourceForge and extract the files For this articleI used version 184 as I ran into some dll issues with

version 188 You also need to download and extract thePlugin bundle Run the x86 installer from the extractedfiles accepting all default options but do not start theConfiguration Tool at the end Next install the LDAPAuthplugin from the extracted Plugin bundle When doneinstalling open the Configuration Tool under the PginaProgram Group under the Start menu On the Plugin tabbrowse to your ldapauth_plusdll in the directory specifiedduring the install Check off the option to Show authenti-cation method selection box This gives you the option oflogging locally if you run into problems Without this theonly way to bypass the pgina client is through Safe ModeNow click on the Configure button and enter the LDAPserver name port and context you want pgina to use tosearch for clients I suggest using the Search Mode as yourLDAP method as it will search the entire directory if it can-not find your user ID Click OK twice to save your settingsUse the Plugin Tester tool before rebooting to load yourclient and test connectivity (Figure 10) On the next loginthe user will receive the prompt shown in Figure 11

Conclusions
FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pgina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.


SYSTEM ADMINISTRATION


Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pgina Site: www.pgina.org

pgina Downloads: sourceforge.net/project/showfiles.php?group_id=53525



In every work environment with which I have been involved, certain servers absolutely always must be up and running for the business to keep functioning smoothly. These servers provide services that always need to be available, whether it be a database, DHCP, DNS, file, Web, firewall or mail server.

A cornerstone of any service that always needs to be up with no downtime is being able to transfer the service from one system to another gracefully. The magic that makes this happen on Linux is a service called Heartbeat. Heartbeat is the main product of the High-Availability Linux Project.

Heartbeat is very flexible and powerful. In this article, I touch on only basic active/passive clusters with two members, where the active server is providing the services and the passive server is waiting to take over if necessary.

Installing Heartbeat
Debian, Fedora, Gentoo, Mandriva, Red Flag, SUSE, Ubuntu and others have prebuilt packages in their repositories. Check your distribution's main and supplemental repositories for a package named heartbeat-2.

After installing a prebuilt package, you may see a "Heartbeat failure" message. This is normal. After the Heartbeat package is installed, the package manager is trying to start up the Heartbeat service. However, the service does not have a valid configuration yet, so the service fails to start and prints the error message.

You can install Heartbeat manually too. To get the most recent stable version, compiling from source may be necessary. There are a few dependencies, so to prepare, on my Ubuntu systems, I first run the following command:

sudo apt-get build-dep heartbeat-2

Check the Linux-HA Web site for the complete list of dependencies. With the dependencies out of the way, download the latest source tarball and untar it. Use the ConfigureMe script to compile and install Heartbeat. This script makes educated guesses from looking at your environment as to how best to configure and install Heartbeat. It also does everything with one command, like so:

sudo ./ConfigureMe install

With any luck, you'll walk away for a few minutes, and when you return, Heartbeat will be compiled and installed on every node in your cluster.

Configuring Heartbeat
Heartbeat has three main configuration files:

/etc/ha.d/authkeys

/etc/ha.d/ha.cf

/etc/ha.d/haresources

The authkeys file must be owned by root and be chmod 600. The actual format of the authkeys file is very simple; it's only two lines. There is an auth directive with an associated method ID number, and there is a line that has the authentication method and the key that go with the ID number of the auth directive. There are three supported authentication methods: crc, md5 and sha1. Listing 1 shows an example. You can have more than one authentication method ID, but this is useful only when you are changing authentication methods or keys. Make the key long; it will improve security, and you don't have to type in the key ever again.
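Because the key never has to be typed again, it is easy to generate a long random one by script. Here is a minimal sketch; it writes to a scratch directory, while on a real node the target would be /etc/ha.d/authkeys, written as root:

```shell
# Generate a random 40-hex-digit key and write a two-line authkeys file.
# Using a scratch directory here; the real target is /etc/ha.d/authkeys.
KEY=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | awk '{print $1}')
mkdir -p /tmp/ha.d
cat > /tmp/ha.d/authkeys <<EOF
auth 1
1 sha1 $KEY
EOF
chmod 600 /tmp/ha.d/authkeys   # Heartbeat refuses to start with looser permissions
cat /tmp/ha.d/authkeys
```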

The ha.cf File
The next file to configure is the ha.cf file, the main Heartbeat configuration file. The contents of this file should be the same on all nodes, with a couple of exceptions.

Heartbeat ships with a detailed example file in the documentation directory that is well worth a look. Also, when creating your ha.cf file, the order in which things appear matters. Don't move them around. Two different example ha.cf files are shown in Listings 2 and 3.

The first thing you need to specify is the keepalive, the time between heartbeats in seconds. I generally like to have this set to one or two, but servers under heavy loads might not be able to send heartbeats in a timely manner. So, if you're seeing a lot of warnings about late heartbeats, try

Getting Started with Heartbeat
Your first step toward high-availability bliss. DANIEL BARTHOLOMEW


Listing 1. The /etc/ha.d/authkeys File

auth 1

1 sha1 ThisIsAVeryLongAndBoringPassword


increasing the keepalive.

The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.
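The four timing directives are easy to sanity-check before you deploy. This sketch emits the timing section of a ha.cf with the values from Listing 2; the rule-of-thumb checks (warntime below deadtime, initdead at least twice deadtime) are my own conventions, not enforced by Heartbeat itself:

```shell
# Emit the timing section of a ha.cf and warn if the values break
# the usual rules of thumb.
KEEPALIVE=2 WARNTIME=16 DEADTIME=32 INITDEAD=64
cat > /tmp/ha-timing.cf <<EOF
keepalive $KEEPALIVE
deadtime $DEADTIME
warntime $WARNTIME
initdead $INITDEAD
EOF
[ "$WARNTIME" -lt "$DEADTIME" ]       || echo "warning: warntime >= deadtime"
[ "$INITDEAD" -ge $((2 * DEADTIME)) ] || echo "warning: initdead < 2x deadtime"
cat /tmp/ha-timing.cf
```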

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is

ucast <device> <peer-ip-address>

Here is an example

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1

# On stratton, the serial device is /dev/ttyS0

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos' heartbeat IP address is 192.168.1.11

# phobos' heartbeat IP address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that, so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File
The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.
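With several managed services, it is handy to script the removal. Here is a dry-run sketch for Debian-style systems; the service names are examples, and the commands are only printed, so you can review them before piping to a root shell to apply them:

```shell
# Print the update-rc.d invocation for each service Heartbeat will
# manage; pipe the output to a root shell to apply for real.
cmds=$(for svc in apache2 nfs-common nfs-kernel-server; do
    printf '/usr/sbin/update-rc.d -f %s remove\n' "$svc"
done)
echo "$cmds"
```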

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPAddr script sets the cluster IP address, which you specify like so:

IPAddr::192.168.1.9/24/eth0

In the above example, the IPAddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right, the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).
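The :: separator makes these resource lines easy to take apart. The sketch below is just awk pulling the spec apart to show which field lands where, not Heartbeat's own parser:

```shell
# Split a haresources-style resource spec on "::" to show the
# mapping: script name, device, mountpoint, filesystem type.
spec='Filesystem::/dev/etherd/e1.0::/opt/data::xfs'
parsed=$(echo "$spec" | awk -F'::' \
    '{printf "script=%s device=%s mount=%s fstype=%s", $1, $2, $3, $4}')
echo "$parsed"
```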




Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
        IPaddr::192.168.1.21 \
        Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
        killnfsd \
        nfs-common \
        nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second, which is not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting
Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion
I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation
To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more space. I recommend at least a 1GHz processor or above with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux. HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java
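If you want to rehearse the link swap before touching /etc/alternatives, the same commands can be exercised against scratch paths. The directory names below are illustrative stand-ins for the real link and JRE install:

```shell
# Rehearse the alternatives swap in /tmp; on the real system the
# link is /etc/alternatives/java and the target is your new JRE.
mkdir -p /tmp/alternatives /tmp/java/jre1.5.0_09/bin
touch /tmp/java/jre1.5.0_09/bin/java
ln -sf /tmp/java/jre1.5.0_09/bin/java /tmp/alternatives/java
readlink /tmp/alternatives/java
```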

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser, under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING


numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management
Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for


Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication
Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two), configured the same way (Fedora + FDS), with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts, or use DNS. I recommend having DNS and NTP in place already to make life easier.

The next step is creating a Replication Manager account to bind one server's directory to the other, and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab, and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On




Figure 5. User Screen

Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync to keep the servers continuously updated. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on the second server, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients
With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM, the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility, available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM, instead of your local passwd and shadow files, to authenticate users.
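For reference, the name-service lines the utility ends up writing to /etc/nsswitch.conf look roughly like the following. This sketch writes them to a scratch file; exact contents vary by release:

```shell
# The passwd/shadow/group lines gain an "ldap" source after "files",
# so local accounts are checked first, then the directory.
cat > /tmp/nsswitch.sample <<'EOF'
passwd: files ldap
shadow: files ldap
group: files ldap
EOF
cat /tmp/nsswitch.sample
```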


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login the local system pulls the LDAP accountrsquosPosix information from FDS and sets the system to matchthe preferences set on the account regarding home directoryand shell options With a little manual work you also canuse automount locally to authenticate and mount networkvolumes at login time automatically

Connecting XP clients is almost as easy TypicallyNT2000XP users are forced to use the built-in MSGINAdll

to authenticate to Microsoft networks only In the pastvendors such as Novell have used their own proprietaryclients to work around this but now the open-sourcepgina client has solved this problem To connect 2000XPclients download the main pgina zipfile from the projectpage on SourceForge and extract the files For this articleI used version 184 as I ran into some dll issues with

version 188 You also need to download and extract thePlugin bundle Run the x86 installer from the extractedfiles accepting all default options but do not start theConfiguration Tool at the end Next install the LDAPAuthplugin from the extracted Plugin bundle When doneinstalling open the Configuration Tool under the PginaProgram Group under the Start menu On the Plugin tabbrowse to your ldapauth_plusdll in the directory specifiedduring the install Check off the option to Show authenti-cation method selection box This gives you the option oflogging locally if you run into problems Without this theonly way to bypass the pgina client is through Safe ModeNow click on the Configure button and enter the LDAPserver name port and context you want pgina to use tosearch for clients I suggest using the Search Mode as yourLDAP method as it will search the entire directory if it can-not find your user ID Click OK twice to save your settingsUse the Plugin Tester tool before rebooting to load yourclient and test connectivity (Figure 10) On the next loginthe user will receive the prompt shown in Figure 11

ConclusionsFDS is a powerful platform and this article has barelyscratched the surface There simply is not room to squeezeall of FDSrsquos other features such as encryption or AD syn-chronization into a single article If you are interested inthese items or want to know how to extend FDS to otherapplications check out the wiki and the how-tos on theprojectrsquos documentation page for further informationJudging from our simple configuration here FDS seemsevolutionary not revolutionary It does not change the wayin which LDAP operates at a fundamental level What itdoes do is take the complex task of administering LDAPand makes it easier while extending normally commercialfeatures such as MMR to open source By adding pginainto the mix you can tap further into FDSrsquos flexibility andcost savings without needing to deploy an array of servicesto connect Windows and Linux clients So if you are look-ing for a simple reliable and cost-saving alternative toother LDAP products consider FDS

Jeramiah Bowling has been a systems administrator and network engineer for more thanten years He works for a regional accounting and auditing firm in Hunt Valley Marylandand holds numerous industry certifications including the CISSP Your comments are welcomeat jb50cyahoocom

4 6 | www l inux journa l com

SYSTEM ADMINISTRATION

Setting up replication in FDS is arelatively painless process

Figure 11 Login Prompt

Figure 10 Plugin Tester Tool

Resources

Main Fedora Site fedoraredhatcom

Fedora ISOs fedoraredhatcomDownload

Fedora Directory Server Site directoryfedoraredhatcom

Main pgina Site wwwpginaorg

pgina Downloads sourceforgenetprojectshowfilesphpgroup_id=53525


increasing the keepalive. The deadtime is next. This is the time to wait without hearing from a cluster member before the surviving members of the array declare the problem host as being dead.

Next comes the warntime. This setting determines how long to wait before issuing a "late heartbeat" warning.

Sometimes, when all members of a cluster are booted at the same time, there is a significant length of time between when Heartbeat is started and before the network or serial interfaces are ready to send and receive heartbeats. The optional initdead directive takes care of this issue by setting an initial deadtime that applies only when Heartbeat is first started.

You can send heartbeats over serial or Ethernet links; either works fine. I like serial for two-server clusters that are physically close together, but Ethernet works just as well. The configuration for serial ports is easy: simply specify the baud rate and then the serial device you are using. The serial device is one place where the ha.cf files on each node may differ, due to the serial port having different names on each host. If you don't know the tty to which your serial port is assigned, run the following command:

setserial -g /dev/ttyS*

If anything in the output says "UART: unknown", that device is not a real serial port. If you have several serial ports, experiment to find out which is the correct one.

If you decide to use Ethernet, you have several choices of how to configure things. For simple two-server clusters, ucast (unicast) or bcast (broadcast) work well.

The format of the ucast line is:

ucast <device> <peer-ip-address>

Here is an example:

ucast eth1 192.168.1.30

If I am using a crossover cable to connect two hosts together, I just broadcast the heartbeat out of the appropriate interface. Here is an example bcast line:

bcast eth3

There is also a more complicated method called mcast. This method uses multicast to send out heartbeat messages. Check the Heartbeat documentation for full details.
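For reference, an mcast line takes a device, a multicast group, a UDP port, a TTL and a loop flag. A typical line might look like the following (an illustrative sketch, not from my clusters; check your Heartbeat version's ha.cf documentation for the exact syntax):

```
# mcast <device> <multicast-group> <udp-port> <ttl> <loop>
mcast eth0 225.0.0.1 694 1 0
```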

Now that we have Heartbeat transportation all sorted out, we can define auto_failback. You can set auto_failback either to on or off. If set to on and the primary node fails, the secondary node will "failback" to its secondary standby state


Listing 2. The /etc/ha.d/ha.cf File on Briggs & Stratton

keepalive 2

deadtime 32

warntime 16

initdead 64

baud 19200

# On briggs, the serial device is /dev/ttyS1;

# on stratton, the serial device is /dev/ttyS0.

serial /dev/ttyS1

auto_failback on

node briggs

node stratton

use_logd yes

Listing 3. The /etc/ha.d/ha.cf File on Deimos & Phobos

keepalive 1

deadtime 10

warntime 5

udpport 694

# deimos' heartbeat IP address is 192.168.1.11

# phobos' heartbeat IP address is 192.168.1.21

ucast eth1 192.168.1.11

auto_failback off

stonith_host deimos wti_nps ares.example.com erisIsTheKey

stonith_host phobos wti_nps ares.example.com erisIsTheKey

node deimos

node phobos

use_logd yes

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that, so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File

The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.
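On Red Hat-style distributions, for example, a rough equivalent uses chkconfig (a sketch; "httpd" here is just an example service name):

```
# Remove the service from chkconfig management entirely
chkconfig --del httpd

# or simply disable it in the relevant runlevels
chkconfig --level 2345 httpd off
```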

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.
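To make the ordering concrete, here is a small illustrative shell sketch (this is not part of Heartbeat, which does this internally; the resource names are hypothetical) of starting resources left to right and stopping them right to left:

```shell
# Illustrative only: mimic Heartbeat's start/stop ordering
# for a hypothetical resource list.
resources="IPaddr apache2 nfs-kernel-server"

start_all() {
    # start resources left to right
    for r in $resources; do
        echo "starting $r"
    done
}

stop_all() {
    # build the list in reverse, then stop each resource
    reversed=""
    for r in $resources; do
        reversed="$r $reversed"
    done
    for r in $reversed; do
        echo "stopping $r"
    done
}

start_all   # IPaddr first, nfs-kernel-server last
stop_all    # nfs-kernel-server first, IPaddr last
```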

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.
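The :: separator simply delimits the resource script's name from its argument. A quick demonstration of how such an entry breaks apart (this is not how Heartbeat parses it internally, just an illustration):

```shell
# Split a haresources entry on "::" to see the script and its argument
spec="IPaddr::192.168.1.9/24/eth0"

script=$(echo "$spec" | awk -F'::' '{print $1}')
arg=$(echo "$spec" | awk -F'::' '{print $2}')

echo "script=$script arg=$arg"
```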

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right: the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).


Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.
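A minimal sketch of the start/stop convention such a script must follow ("mydaemon" is a hypothetical service; a real /etc/init.d script would be a standalone file that actually launches and kills the dæmon):

```shell
# Skeleton of an LSB-style init interface, written as a shell
# function for illustration.
mydaemon_ctl() {
    case "$1" in
        start)
            echo "Starting mydaemon"
            ;;
        stop)
            echo "Stopping mydaemon"
            ;;
        restart)
            mydaemon_ctl stop
            mydaemon_ctl start
            ;;
        *)
            echo "Usage: mydaemon {start|stop|restart}" >&2
            return 1
            ;;
    esac
}

mydaemon_ctl start
mydaemon_ctl stop
```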

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second. Not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting

Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion

I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful, though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha



In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far, the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation

To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more space. I recommend at least a 1GHz processor or above with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two

ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892, because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.
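If you manage the firewall with iptables directly, rules along these lines would open the two custom ports chosen here (a sketch only; adapt the chain and persistence mechanism to your own setup):

```
# Allow the custom LDAP and Admin Console ports used in this article
iptables -A INPUT -p tcp --dport 3891 -j ACCEPT
iptables -A INPUT -p tcp --dport 3892 -j ACCEPT
```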

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication

Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

SYSTEM ADMINISTRATION

numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script, or directly edit /etc/rc.d/rc.local, to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.
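For example, a nightly backup could be scheduled by calling the instance's db2bak script from cron (a sketch; substitute the slapd-yourservername directory your install actually created):

```
# Hypothetical crontab entry: back up the directory database at 2am daily
0 2 * * * /opt/fedora-ds/slapd-yourservername/db2bak
```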

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for


Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying

permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate

the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two), configured the same way (Fedora/FDS), with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts, or use DNS. I recommend having DNS and NTP in place already to make life easier.

The next step is creating a Replication Manager account to bind one server's directory to the other, and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab, and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On


Figure 5. User Screen

Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication, to send only deltas rather than the entire directory database when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on the second server, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients

With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but due to space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM, the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility, available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
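For reference, the settings the utility manages look roughly like the following (a sketch of typical entries only; the server address and base DN here are placeholders, and your values will differ):

```
# /etc/nsswitch.conf: consult LDAP after local files
passwd: files ldap
shadow: files ldap
group:  files ldap

# /etc/ldap.conf: where to find the FDS server
host 192.168.1.10
base dc=example,dc=com
port 3891
```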


Figure 7. Enter a name and password.

Figure 8. Enter information for the RManager account.

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll

to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now, the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some .dll issues with

version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and makes it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.


Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site fedoraredhatcom

Fedora ISOs fedoraredhatcomDownload

Fedora Directory Server Site directoryfedoraredhatcom

Main pgina Site wwwpginaorg

pgina Downloads sourceforgenetprojectshowfilesphpgroup_id=53525

Page 40: Linux Journal | System Administration Special Issue | 2009 · 2012-05-10 · 5 SPECIAL_ISSUE.TAR.GZ System Administration: Instant Gratification Edition Shawn Powers 6 CFENGINE FOR

when the primary node returns. If set to off, when the primary node comes back, it will be the secondary.

It's a toss-up as to which one to use. My thinking is that so long as the servers are identical, if my primary node fails, then the secondary node becomes the primary, and when the prior primary comes back, it becomes the secondary. However, if my secondary server is not as powerful a machine as the primary, similar to how the spare tire in my car is not a "real" tire, I like the primary to become the primary again as soon as it comes back.
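The choice boils down to a single rule; here is a toy shell function capturing it (purely illustrative, not part of Heartbeat):

```shell
# Toy model of auto_failback: after the original primary node has
# failed and then returned, which node ends up owning the resources?
owner_after_return() {
    auto_failback=$1   # "on" or "off"
    if [ "$auto_failback" = "on" ]; then
        echo primary     # the returning node reclaims the resources
    else
        echo secondary   # the standby keeps them; the roles have swapped
    fi
}

owner_after_return on
owner_after_return off
```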

Moving on, when Heartbeat thinks a node is dead, that is just a best guess. The "dead" server may still be up. In some cases, if the "dead" server is still partially functional, the consequences are disastrous to the other node members. Sometimes, there's only one way to be sure whether a node is dead, and that is to kill it. This is where STONITH comes in.

STONITH stands for Shoot The Other Node In The Head. STONITH devices are commonly some sort of network power-control device. To see the full list of supported STONITH device types, use the stonith -L command, and use stonith -h to see how to configure them.
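A STONITH device is wired into ha.cf with a stonith_host line. The fragment below follows the commented examples in the sample ha.cf shipped with Heartbeat; the baytech device type, address and credentials are placeholder assumptions, not values from this article:

```
# /etc/ha.d/ha.cf fragment: allow any node (*) to shoot the other via a
# BayTech network power switch (address and credentials are examples)
stonith_host * baytech 10.0.0.3 mylogin mysecretpassword
```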

Next, in the ha.cf file, you need to list your nodes. List each one on its own line, like so:

node deimos

node phobos

The name you use must match the output of uname -n.

The last entry in my example ha.cf files is to turn on logging:

use_logd yes

There are many other options that can't be touched on here. Check the documentation for details.

The haresources File

The third configuration file is the haresources file. Before configuring it, you need to do some housecleaning. Namely, all services that you want Heartbeat to manage must be removed from the system init for all init levels.

On Debian-style distributions, the command is:

/usr/sbin/update-rc.d -f <service_name> remove

Check your distribution's documentation for how to do the same on your nodes.

Now you can put the services into the haresources file.

As with the other two configuration files for Heartbeat, this one probably won't be very large. Similar to the authkeys file, the haresources file must be exactly the same on every node. And, like the ha.cf file, position is very important in this file. When control is transferred to a node, the resources listed in the haresources file are started left to right, and when control is transferred to a different node, the resources are stopped right to left. Here's the basic format:

<node_name> <resource_1> <resource_2> <resource_3>

The node_name is the node you want to be the primary on initial startup of the cluster, and if you turned on auto_failback, this server always will become the primary node whenever it is up. The node name must match the name of one of the nodes listed in the ha.cf file.
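The start-left-to-right, stop-right-to-left ordering can be sketched with a small shell loop (illustrative only; Heartbeat does this internally):

```shell
# Resources as they would appear on a haresources line, left to right
resources="IPaddr Filesystem nfs-kernel-server"

start_all() {               # takeover: start resources left to right
    for r in $resources; do
        echo "starting $r"
    done
}

stop_all() {                # release: stop resources right to left
    reversed=""
    for r in $resources; do
        reversed="$r $reversed"
    done
    for r in $reversed; do
        echo "stopping $r"
    done
}

start_all   # IPaddr first, nfs-kernel-server last
stop_all    # nfs-kernel-server first, IPaddr last
```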

Resources are scripts located either in /etc/ha.d/resource.d or /etc/init.d, and if you want to create your own resource scripts, they should conform to LSB-style init scripts, like those found in /etc/init.d. Some of the scripts in the resource.d folder can take arguments, which you can pass using a :: on the resource line. For example, the IPaddr script sets the cluster IP address, which you specify like so:

IPaddr::192.168.1.9/24/eth0

In the above example, the IPaddr resource is told to set up a cluster IP address of 192.168.1.9 with a 24-bit subnet mask (255.255.255.0) and to bind it to eth0. You can pass other options as well; check the example haresources file that ships with Heartbeat for more information.
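The double-colon separator simply splits a resource specification into the script name and its arguments; shell parameter expansion shows the split (a toy sketch, not Heartbeat's actual parser):

```shell
# Split a haresources resource spec on its first "::"
spec="IPaddr::192.168.1.9/24/eth0"
script=${spec%%::*}     # everything before the first ::
args=${spec#*::}        # everything after it
echo "$script"          # IPaddr
echo "$args"            # 192.168.1.9/24/eth0
```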

Another common resource is Filesystem. This resource is for mounting shared filesystems. Here is an example:

Filesystem::/dev/etherd/e1.0::/opt/data::xfs

The arguments to the Filesystem resource in the example above are, left to right, the device node (an ATA-over-Ethernet drive in this case), a mountpoint (/opt/data) and the filesystem type (xfs).


SYSTEM ADMINISTRATION


Listing 4. A Minimalist haresources File

stratton 192.168.1.41 apache2

Listing 5. A More Substantial haresources File

deimos \
    IPaddr::192.168.1.21 \
    Filesystem::/dev/etherd/e1.0::/opt/storage::xfs \
    killnfsd \
    nfs-common \
    nfs-kernel-server

For regular init scripts in /etc/init.d, simply enter them by name. As long as they can be started with start and stopped with stop, there is a good chance that they will work.

Listings 4 and 5 are haresources files for two of the clusters I run. They are paired with the ha.cf files in Listings 2 and 3, respectively.

The cluster defined in Listings 2 and 4 is very simple, and it has only two resources: a cluster IP address and the Apache 2 Web server. I use this for my personal home Web server cluster. The servers themselves are nothing special: an old PIII tower and a cast-off laptop. The content on the servers is static HTML, and the content is kept in sync with an hourly rsync cron job. I don't trust either "server" very much, but with Heartbeat, I have never had an outage longer than half a second, not bad for two old castaways.

The cluster defined in Listings 3 and 5 is a bit more complicated. This is the NFS cluster I administer at work. This cluster utilizes shared storage in the form of a pair of Coraid SR1521 ATA-over-Ethernet drive arrays, two NFS appliances (also from Coraid) and a STONITH device. STONITH is important for this cluster, because in the event of a failure, I need to be sure that the other device is really dead before mounting the shared storage on the other node. There are five resources managed in this cluster, and to keep the line in haresources from getting too long to be readable, I break it up with line-continuation slashes. If the primary cluster member is having trouble, the secondary cluster member kills the primary, takes over the IP address, mounts the shared storage and then starts up NFS. With this cluster, instead of having maintenance issues or other outages lasting several minutes to an hour (or more), outages now don't last beyond a second or two. I can live with that.

Troubleshooting

Now that your cluster is all configured, start it with:

/etc/init.d/heartbeat start

Things might work perfectly or not at all. Fortunately, with logging enabled, troubleshooting is easy, because Heartbeat outputs informative log messages. Heartbeat even will let you know when a previous log message is not something you have to worry about. When bringing a new cluster on-line, I usually open an SSH terminal to each cluster member and tail the messages file, like so:

tail -f /var/log/messages

Then, in separate terminals, I start up Heartbeat. If there are any problems, it is usually pretty easy to spot them.

Heartbeat also comes with very good documentation. Whenever I run into problems, this documentation has been invaluable. On my system, it is located under the /usr/share/doc directory.

Conclusion

I've barely scratched the surface of Heartbeat's capabilities here. Fortunately, a lot of resources exist to help you learn about Heartbeat's more-advanced features. These include active/passive and active/active clusters with N number of nodes, DRBD, the Cluster Resource Manager and more. Now that your feet are wet, hopefully you won't be quite as intimidated as I was when I first started learning about Heartbeat. Be careful though, or you might end up like me and want to cluster everything.

Daniel Bartholomew has been using computers since the early 1980s, when his parents purchased an Apple IIe. After stints on Mac and Windows machines, he discovered Linux in 1996 and has been using various distributions ever since. He lives with his wife and children in North Carolina.


Resources

The High-Availability Linux Project: www.linux-ha.org

Heartbeat Home Page: www.linux-ha.org/Heartbeat

Getting Started with Heartbeat Version 2: www.linux-ha.org/GettingStartedV2

An Introductory Heartbeat Screencast: linux-ha.org/Education/Newbie/InstallHeartbeatScreencast

The Linux-HA Mailing List: lists.linux-ha.org/mailman/listinfo/linux-ha


In most enterprise networks today, centralized authentication is a basic security paradigm. In the Linux realm, OpenLDAP has been king of the hill for many years, but for those unfamiliar with the LDAP command-line interface (CLI), it can be a painstaking process to deploy. Enter the Fedora Directory Server (FDS). Released under the GPL in June 2005 as Fedora Directory Server 7.1 (changed to version 1.0 in December of the same year), FDS has roots in both the Netscape Directory Server Project and its sister product, the Red Hat Directory Server (RHDS). Some of FDS's notable features are its easy-to-use Java-based Administration Console, support for LDAPv3 and Active Directory integration. By far the most attractive feature of FDS is Multi-Master Replication (MMR). MMR allows for multiple servers to maintain the same directory information, so that the loss of one server does not bring the directory down as it would in a master-slave environment.

Getting an FDS server up and running has its ups and downs. Once the server is operational, however, Red Hat makes it easy to administer your directory and connect native Fedora clients. In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others. In this article, we focus solely on using FDS for network authentication and implementing MMR.

Installation

To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of disk space. I recommend at least a 1GHz processor with 512MB or more memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to enter two

ports to allow LDAP (389 default) and the Admin Console (the default is a random port). For the servers used here, I chose ports 3891 and 3892 because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives
rm java
ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation, except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct

Fedora Directory Server: the Evolution of Linux Authentication

Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING


numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port), using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd
/opt/fedora-ds/start-admin &

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.
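Scheduling a backup from the CLI can be as simple as a cron entry that calls the instance's db2bak backup script; the path below follows the /opt/fedora-ds instance layout described earlier, and the 2am time is an arbitrary example:

```
# crontab entry: back up the directory database nightly at 2am
# (adjust slapd-yourservername to match your instance directory)
0 2 * * * /opt/fedora-ds/slapd-yourservername/db2bak
```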

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for


Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plug-ins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying

permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora/FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already to make life easier.
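If you go the /etc/hosts route, each server carries an entry for the other; a sketch with placeholder addresses and hostnames (not values from this setup):

```
# /etc/hosts on each FDS server, so each node resolves its peer
192.168.1.10   fds1.example.com   fds1
192.168.1.11   fds2.example.com   fds2
```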

The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom. On




Figure 5. User Screen

Figure 6. Setting Up Replica

server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always

keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on the second server, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients

With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM, the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility, available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information and Authentication tabs. Once you click OK to these settings, Fedora updates your /etc/nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
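Behind the scenes, the utility's changes amount to roughly the entries below; the hostname and base DN are placeholder assumptions, with the custom port 3891 from our earlier setup:

```
# /etc/ldap.conf: where the LDAP client libraries should look
host fds1.example.com
base dc=example,dc=com
port 3891

# /etc/nsswitch.conf: consult LDAP after the local files
passwd: files ldap
shadow: files ldap
group:  files ldap
```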


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll

to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pgina client has solved this problem. To connect 2000/XP clients, download the main pgina zipfile from the project page on SourceForge and extract the files. For this article, I used version 1.8.4, as I ran into some .dll issues with

version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the Pgina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pgina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pgina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pgina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.




Figure 10. Plugin Tester Tool

Figure 11. Login Prompt

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pgina Site: www.pgina.org

pgina Downloads: sourceforge.net/project/showfiles.php?group_id=53525

Page 41: Linux Journal | System Administration Special Issue | 2009 · 2012-05-10 · 5 SPECIAL_ISSUE.TAR.GZ System Administration: Instant Gratification Edition Shawn Powers 6 CFENGINE FOR

For regular init scripts in etcinitd simply enter them byname As long as they can be started with start and stoppedwith stop there is a good chance that they will work

Listings 4 and 5 are haresources files for two of theclusters I run They are paired with the hacf files in Listings 2and 3 respectively

The cluster defined in Listings 2 and 4 is very simple and ithas only two resourcesmdasha cluster IP address and the Apache 2Web server I use this for my personal home Web server clus-ter The servers themselves are nothing specialmdashan old PIIItower and a cast-off laptop The content on the servers is stat-ic HTML and the content is kept in sync with an hourly rsynccron job I donrsquot trust either ldquoserverrdquo very much but withHeartbeat I have never had an outage longer than half asecondmdashnot bad for two old castaways

The cluster defined in Listings 3 and 5 is a bit morecomplicated This is the NFS cluster I administer at workThis cluster utilizes shared storage in the form of a pair ofCoraid SR1521 ATA-over-Ethernet drive arrays two NFSappliances (also from Coraid) and a STONITH deviceSTONITH is important for this cluster because in the eventof a failure I need to be sure that the other device is really dead before mounting the shared storage on theother node There are five resources managed in this clus-ter and to keep the line in haresources from getting toolong to be readable I break it up with line-continuationslashes If the primary cluster member is having troublethe secondary cluster kills the primary takes over the IPaddress mounts the shared storage and then starts upNFS With this cluster instead of having maintenanceissues or other outages lasting several minutes to an hour(or more) outages now donrsquot last beyond a second or two I can live with that

TroubleshootingNow that your cluster is all configured start it with

etcinitdheartbeat start

Things might work perfectly or not at all Fortunately withlogging enabled troubleshooting is easy because Heartbeatoutputs informative log messages Heartbeat even will let youknow when a previous log message is not something you haveto worry about When bringing a new cluster on-line I usuallyopen an SSH terminal to each cluster member and tail themessages file like so

tail -f varlogmessages

Then in separate terminals I start up Heartbeat If thereare any problems it is usually pretty easy to spot them

Heartbeat also comes with very good documentationWhenever I run into problems this documentation hasbeen invaluable On my system it is located under theusrsharedoc directory

ConclusionIrsquove barely scratched the surface of Heartbeatrsquos capabilitieshere Fortunately a lot of resources exist to help you learnabout Heartbeatrsquos more-advanced features These include

activepassive and activeactive clusters with N number ofnodes DRBD the Cluster Resource Manager and more Nowthat your feet are wet hopefully you wonrsquot be quite as intimi-dated as I was when I first started learning about HeartbeatBe careful though or you might end up like me and want tocluster everything

Daniel Bartholomew has been using computers since the early 1980s when his parents purchased an Apple IIe After stints on Mac and Windows machines he discovered Linux in 1996 and has been using various distributions ever since He lives with his wife and children in North Carolina

wwwl inux journa l com | 4 1

Resources

The High-Availability Linux Project wwwlinux-haorg

Heartbeat Home Page wwwlinux-haorgHeartbeat

Getting Started with Heartbeat Version 2 wwwlinux-haorgGettingStartedV2

An Introductory Heartbeat Screencast linux-haorgEducationNewbieInstallHeartbeatScreencast

The Linux-HA Mailing List listslinux-haorgmailmanlistinfolinux-ha

The 1994ndash2007 Archive CDback issues and more

wwwLinuxJournalcomArchiveCD

Archive CD1994ndash2007

trade

4 2 | www l inux journa l com

In most enterprise networks today centralized authenti-cation is a basic security paradigm In the Linux realmOpenLDAP has been king of the hill for many years butfor those unfamiliar with the LDAP command-line interface(CLI) it can be a painstaking process to deploy Enter theFedora Directory Server (FDS) Released under the GPL inJune 2005 as Fedora Directory Server 71 (changed to ver-sion 10 in December of the same year) FDS has roots inboth the Netscape Directory Server Project and its sisterproduct the Red Hat Directory Server (RHDS) Some ofFDSrsquos notable features are its easy-to-use Java-basedAdministration Console support for LDAP3 and ActiveDirectory integration By far the most attractive feature ofFDS is Multi-Master Replication (MMR) MMR allows formultiple servers to maintain the same directory informationso that the loss of one server does not bring the directorydown as it would in a master-slave environment

Getting an FDS server up and running has its ups anddowns Once the server is operational however Red Hatmakes it easy to administer your directory and connect nativeFedora clients In addition to providing network authentica-tion you easily can extend FDS functionality across otherapplications such as NFS Samba Sendmail Postfix andothers In this article we focus solely on using FDS for network authentication and implementing MMR

Installation

To begin, download a Fedora 6 ISO, readily available from one of the many Fedora mirrors. FDS has low hardware requirements: 500MHz with 256MB RAM and 3GB or more of disk space. I recommend at least a 1GHz processor with 512MB or more of memory and 20GB or more of disk space. This configuration should perform well enough to support small businesses up to enterprises with thousands of users. As for supported operating systems, not surprisingly, Red Hat lists Fedora and Red Hat flavors of Linux; HP-UX and Solaris also are supported. With your bootable ISO CD, start the Fedora 6 installation process, and select your desired system preferences and packages. Make sure to select Apache during installation. Set your host and DNS information during the install, using a fully qualified domain name (FQDN). You also can set this information post-install, but it is critical that your host information is configured properly. If you plan to use a firewall, you need to open two ports to allow LDAP (389 default) and the Admin Console (default is a random port). For the servers used here, I chose ports 3891 and 3892 because of an existing LDAP installation in my environment. Fedora also natively supports Security-Enhanced Linux (SELinux), a policy-based lock-down tool, if you choose to use it. If you want to use SELinux, you must choose the Permissive Policy.
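If you manage the firewall from the CLI rather than during installation, rules along these lines would open the two ports. This is a sketch only, assuming iptables and the custom ports chosen for this article; adjust the numbers if you kept the defaults:

```shell
# Sketch: open the custom LDAP and Admin Console ports used in this
# article (3891 and 3892). Adjust to 389 and your chosen Admin
# Console port if you accepted the defaults.
iptables -A INPUT -p tcp --dport 3891 -j ACCEPT   # LDAP
iptables -A INPUT -p tcp --dport 3892 -j ACCEPT   # Admin Console
service iptables save                             # persist the rules on Fedora
```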

Once your Fedora 6 server is up, download and install the latest RPM of Fedora Directory Server from the FDS site (it is not included in the Fedora 6 distribution). Running the RPM unpacks the program files to /opt/fedora-ds. At this point, download and install the current Java Runtime Environment (JRE) .bin file from Sun before running the local setup of FDS. To keep files in the same place, I created an /opt/java directory and downloaded and ran the .bin file from there. After Java is installed, replace the existing soft link to Java in /etc/alternatives with a link to your new Java installation. The following syntax does this:

cd /etc/alternatives

rm java

ln -sf /opt/java/jre1.5.0_09/bin/java java

Next, configure Apache to start on boot with the chkconfig command:

chkconfig --level 345 httpd on

Then, start the service by typing:

service httpd start

Now, with the useradd command, create an account named fedorauser under which FDS will run. After creating the account, run /opt/fedora-ds/setup/setup to launch the FDS installation script. Before continuing, you may be presented with several error messages about non-optimal configuration issues, but in most cases, you can answer yes to get past them and start the setup process. Once started, select the default Install Mode 2 - Typical. Accept all defaults during installation except for the Server and Group IDs, for which we are using the fedorauser account (Figure 1), and if you want to customize the ports as we have here, set those to the correct numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

Fedora Directory Server: the Evolution of Linux Authentication
Check out Fedora Directory Server to authenticate your clients without licensing fees. JERAMIAH BOWLING

SYSTEM ADMINISTRATION

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fullyqualifieddomainname:port) using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &
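Once slapd is running, a quick anonymous search confirms the server is answering. This is a sketch: the hostname, base DN and custom port below are placeholders standing in for your own values from the setup above:

```shell
# Sketch: verify the directory answers on the custom LDAP port used
# in this article. Substitute your own FQDN, port and base suffix.
ldapsearch -x -h fqdn.example.com -p 3891 \
    -b "dc=example,dc=com" -s base "(objectclass=*)"
```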

Directory Structure and Management

Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.
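Scheduling that CLI backup can be as simple as a cron entry. The db2bak script path below is an assumption based on the install layout described earlier; verify it on your own server:

```shell
# Sketch: run a nightly backup at 2 a.m. via the instance's db2bak
# script. The slapd-yourservername path is an assumption from the
# layout described in this article; check your actual instance name.
echo '0 2 * * * root /opt/fedora-ds/slapd-yourservername/db2bak' \
    >> /etc/crontab
```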

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for logs and plug-ins.

www.linuxjournal.com | 43

Figure 1. Install Options
Figure 2. Main Console
Figure 3. Tasks Tab
Figure 4. Main Logs

The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.
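The console's login-ID convention (first initial plus last name) also can be reproduced when adding users from the CLI. The sketch below builds a hypothetical LDIF entry: the OU, domain and ID numbers are example values, not anything generated by FDS itself:

```shell
# Hypothetical sketch: build an LDIF entry whose uid follows the
# console's convention (first initial + last name, lowercased).
# The OU, domain, uidNumber and gidNumber are example values only.
first="Jane"; last="Smith"
initial=$(printf '%s' "$first" | cut -c1)
uid=$(printf '%s%s' "$initial" "$last" | tr '[:upper:]' '[:lower:]')
cat <<EOF > newuser.ldif
dn: uid=$uid,ou=Accounting,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
cn: $first $last
sn: $last
uid: $uid
uidNumber: 5001
gidNumber: 5001
homeDirectory: /home/$uid
loginShell: /bin/bash
EOF
# Then feed it to the server, for example:
#   ldapmodify -a -x -h yourserver -p 3891 \
#       -D "cn=Directory Manager" -W -f newuser.ldif
```

Note the posixAccount object class: it carries the uidNumber, gidNumber, home directory and shell attributes that, as described below, clients need in order to authenticate.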

Replication

Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two), configured the same way (Fedora + FDS), with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already to make life easier.
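A few quick checks from server one cover both prerequisites; the hostnames below are placeholders for your own servers:

```shell
# Sketch: sanity checks before enabling replication. Hostnames are
# placeholders; run the equivalents from each server toward the other.
host server-two.example.com     # name resolution of the peer
ntpdate -q pool.ntp.org         # query-only check of the clock offset
date                            # compare the output on both servers
```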

The next step is creating a Replication Manager account to bind one server's directory to the other, and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add and then the Save button at the bottom.

Figure 5. User Screen
Figure 6. Setting Up Replica

On server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).

Now you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always keep directories in sync; you also could choose to schedule your updates instead. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting ldif folder to server two, and import and initialize it from the Directory Server Tasks pane. When you click next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and, if we choose, we can add another two Read-Write replicas/Masters.

Authenticating Desktop Clients

With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility, available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tab. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM, instead of your local passwd and shadow files, to authenticate users.
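Behind the scenes, the utility manages fragments roughly like the following. This is a sketch only: the host, port and base DN are placeholders for your own values, not anything the tool generates verbatim:

```shell
# Sketch of the client-side fragments the Authentication utility
# manages. Host, port and base DN are placeholders for your setup.

# /etc/ldap.conf
#   host fqdn.example.com
#   port 3891
#   base dc=example,dc=com

# /etc/nsswitch.conf
#   passwd: files ldap
#   shadow: files ldap
#   group:  files ldap
```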


Figure 7. Enter a Name and Password

Figure 8. Enter Information for the RManager Account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some .dll issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the Pgina Program Group under the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check off the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems. Without this, the only way to bypass the pGina client is through Safe Mode. Now, click on the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using the Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.




Figure 10. Plugin Tester Tool

Figure 11. Login Prompt

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525

Page 42: Linux Journal | System Administration Special Issue | 2009 · 2012-05-10 · 5 SPECIAL_ISSUE.TAR.GZ System Administration: Instant Gratification Edition Shawn Powers 6 CFENGINE FOR

4 2 | www l inux journa l com

In most enterprise networks today centralized authenti-cation is a basic security paradigm In the Linux realmOpenLDAP has been king of the hill for many years butfor those unfamiliar with the LDAP command-line interface(CLI) it can be a painstaking process to deploy Enter theFedora Directory Server (FDS) Released under the GPL inJune 2005 as Fedora Directory Server 71 (changed to ver-sion 10 in December of the same year) FDS has roots inboth the Netscape Directory Server Project and its sisterproduct the Red Hat Directory Server (RHDS) Some ofFDSrsquos notable features are its easy-to-use Java-basedAdministration Console support for LDAP3 and ActiveDirectory integration By far the most attractive feature ofFDS is Multi-Master Replication (MMR) MMR allows formultiple servers to maintain the same directory informationso that the loss of one server does not bring the directorydown as it would in a master-slave environment

Getting an FDS server up and running has its ups anddowns Once the server is operational however Red Hatmakes it easy to administer your directory and connect nativeFedora clients In addition to providing network authentica-tion you easily can extend FDS functionality across otherapplications such as NFS Samba Sendmail Postfix andothers In this article we focus solely on using FDS for network authentication and implementing MMR

InstallationTo begin download a Fedora 6 ISO readily available from oneof the many Fedora mirrors FDS has low hardware require-mentsmdash500MHz with 256MB RAM and 3GB or more space Irecommend at least a 1GHz processor or above with 512MBor more memory and 20GB or more of disk space This config-uration should perform well enough to support small business-es up to enterprises with thousands users As for supportedoperating systems not surprisingly Red Hat lists Fedora andRed Hat flavors of Linux HP-UX and Solaris also are supportedWith your bootable ISO CD start the Fedora 6 installation pro-cess and select your desired system preferences and packagesMake sure to select Apache during installation Set your hostand DNS information during the install using a fully qualifieddomain name (FQDN) You also can set this information post-install but it is critical that your host information is configuredproperly If you plan to use a firewall you need to enter two

ports to allow LDAP (389 default) and the Admin Console(default is random port) For the servers used here I choseports 3891 and 3892 because of an existing LDAP installationin my environment Fedora also natively supports Security-Enhanced Linux (SELinux) a policy-based lock-down tool ifyou choose to use it If you want to use SELinux you mustchoose the Permissive Policy

Once your Fedora 6 server is up download and install thelatest RPM of Fedora Directory Server from the FDS site (it isnot included in the Fedora 6 distribution) Running the RPMunpacks the program files to optfedora-ds At this pointdownload and install the current Java Runtime Environment(JRE) bin file from Sun before running the local setup ofFDS To keep files in the same place I created an optjavadirectory and downloaded and ran the bin file from thereAfter Java is installed replace the existing soft link to Javain etcalternatives with a link to your new Java installationThe following syntax does this

cd etcalternatives

rm java

ln -sf optjavajre150_09binjava java

Next configure Apache to start on boot with the chkconfig command

chkconfig -level 345 httpd on

Then start the service by typing

service httpd start

Now with the useradd command create an accountnamed fedorauser under which FDS will run After creating theaccount run optfedora-dssetupsetup to launch theFDS installation script Before continuing you may be present-ed with several error messages about non-optimal configura-tion issues but in most cases you can answer yes to get pastthem and start the setup process Once started select thedefault Install Mode 2 - Typical Accept all defaults duringinstallation except for the Server and Group IDs for which weare using the fedorauser account (Figure 1) and if you want tocustomize the ports as we have here set those to the correct

Fedora Directory Server the Evolution of LinuxAuthenticationCheck out Fedora Directory Server to authenticate your clients without licensing fees JERAMIAH BOWLING

SYSTEM ADMINISTRATION

numbers (38913892) You also may want to use the samepasswords for both the configuration Admin and DirectoryManager accounts created during setup

When setup is complete use the syntax returned fromthe script to access the admin console (startconsole -uadmin httpfullyqualifieddomainnameport) using theAdministrator account (default name is Admin) you specifiedduring the FDS setup You always can call the Admin consoleusing this same syntax from optfedora-ds

At this point you have a functioning directory serverThe final step is to create a startup script or directly editetcrcdrclocal to start the slapd process and the adminserver when the machine starts Here is an example of ascript that does this

optfedora-dsslapd-yourservernamestart-slapd

optfedora-dsstart-admin amp

Directory Structure and ManagementLooking at the Administration console you will see serverinformation on the Default tab (Figure 2) and a search utilityon the second tab On the Servers and Applications tabexpand your server name to display the two server consolesthat are accessible the Administration Server and the DirectoryServer Double-click the Directory Server icon and you aretaken to the Tasks tab of the Directory Server (Figure 3) Fromhere you can start and stop directory services manage certifi-cates importexport directory databases and back up yourserver The backup feature is one of the highlights of the FDSconsole It lets you back up any directory database on yourserver without any knowledge of the CLI The only caveat isthat you still need to use the CLI to schedule backups

On the Status tab you can view the appropriate logs tomonitor activity or diagnose problems with FDS (Figure 4)Simply expand one of the logs in the left pane to display itsoutput in the right pane Use the Continuous Refresh optionto view logs in real time

From the Configuration tab you can define replicas (copiesof the directory) and synchronization options with otherservers edit schema initialize databases and define options for

www l inux journa l com | 4 3

Figure 1 Install Options Figure 2 Main Console

Figure 3 Tasks Tab

Figure 4 Main Logs

logs and plugins The Directory tab is laid out by the domainsfor which the server hosts information The Netscape folder iscreated automatically by the installation and stores informationabout your directory The config folder is used by FDS tostore information about the serverrsquos operation In mostcases it should be left alone but we will use it when weset up replication

Before creating your directory structure you should carefullyconsider how you want to organize your users in the directoryThere is not enough space here for a decent primer on directoryplanning but I typically use Organizational Units (OUs) tosegment people grouped together by common securityneeds such as by department (for example Accounting)Then within an OU I create users and groups for simplifying

permissions or creating e-mail address lists (for exampleAccount Managers) FDS also supports roles which essentiallyare templates for new users This strategy is not hard and fastand you certainly can use the default domain directory struc-ture created during installation for most of your needs (Figure5) Whatever strategy you choose you should consult the RedHat documentation prior to deployment

To create a new user right-click an OU under your domainand click on NewrarrUser to bring up the New User screen(Figure 5) Fill in the appropriate entries At minimum youneed a First Name Last Name and Common Name (cn) whichis created from your first and last name Your login ID is createdautomatically from your first initial and your last name You alsocan enter an e-mail address and a password From the optionsin the left pane of the User window you can add NT Userinformation Posix Account information and activatede-activate

the account You can implement a Password Policy from theDirectory tab by right-clicking a domain or OU and selectingManage Password Policy However I could not get this featureto work properly

ReplicationSetting up replication in FDS is a relatively painless process Inour scenario we configure two-way multi-master replicationbut Red Hat supports up to four-way Because we already haveone server (server one) in operation we need another system(server two) configured the same way (Fedora FDS) withwhich to replicate Use the same settings used on server one(other than hostname) to install and configure Fedora 6FDSWith both servers up verify time synchronization and nameresolution between them The servers must have synced timeor be relatively close in their offset and they must be able toresolve the otherrsquos hostname for replication to work You canuse IP addresses configure etchosts or use DNS I recom-mend having DNS and NTP in place already to make life easier

The next step is creating a Replication Manager account tobind one serverrsquos directory to the other and vice versa On theDirectory tab of the Directory Server console create a useraccount under the config folder (off the main root not yourdomain) and give it the name Replication Manager whichshould create a login ID (uid) of RManager Use the samepassword for the RManager on both serversdirectories Onserver one click the Configuration tab and then the Replicationfolder In the right pane click Enable Changelog Click theUse Default button to fill in the default path under yourslapd-servername directory Click Save Next expand theReplication folder and click on the userroot database In theright pane click on enable replica and select Multiple Masterunder the Replica Role section (Figure 6) Enter a unique ReplicaID This number must be different on each server involved inreplication Scroll down to the update section below and in theEnter a new supplier DN textbox enter the following

uid=RManagercn=config

Click Add and then the Save button at the bottom On

4 4 | www l inux journa l com

SYSTEM ADMINISTRATION

In addition to providing networkauthentication you easily can extendFDS functionality across other applications such as NFS SambaSendmail Postfix and others

Figure 5 User Screen Figure 6 Setting Up Replica

server two perform the same steps as just completed for creating the RManager account in the config folderenabling changelogs and creating a Multiple MasterReplica (with a different Replica ID)

Now you need to set up a replication agreement backon server one From the Configuration tab of the DirectoryServer right-click the userroot database and select NewReplication Agreement A wizard then guides you throughthe process Enter a name and description for the agree-ment (Figure 7) On the Source and Destination screenunder Consumer click the Other button to add server two as a consumer You must use the FQDN and the correct port (3891 in our case) Use Simple Authenticationin the Connection box and enter the full context(uid=Rmanagercn=config) of the RManager account andthe password for the account (Figure 8) On the nextscreen check off the box to enable Fractional Replicationto send only deltas rather than the entire directorydatabase when changes occur which is very useful overWAN connections On the Schedule screen select Always

keep directories in sync to keep You also could choose toschedule your updates On the Initialize Consumer screenuse the Initialize Now option If you experience any diffi-culty in initializing a consumer (server two) you can usethe Create consumer initialization file option and copy theresulting ldif folder to server two and import and initializeit from the Directory Server Tasks pane When you clicknext you will see the replication taking place A summaryappears at the end of the process To verify replicationtook place check the Replication and Error logs on thefirst server for success messages To complete MMR set up a replication agreement on the server listing server oneas the consumer but do not initialize at the end of thewizard Doing so would overwrite the directory on serverone with the directory on server two With our setup complete we now have a redundant authentication infrastructure in place and if we choose we can addanother two Read-Write replicasMasters

Authenticating Desktop ClientsWith our infrastructure in place we can connect our desktopclients For our configuration we use native Fedora 6 clientsand Windows XP clients to simulate a mixed environmentOther Linux flavors can connect to FDS but for space con-straints we wonrsquot delve into connecting them It should benoted that most distributions like Fedora use PAM theetcnsswitchconf and etcldapconf files to set up LDAPauthentication Regardless of client type the user accountattempting to log in must contain Posix information in thedirectory in order to authenticate to the FDS server To connectFedora clients use the built-in Authentication utility availablein both GNOME and KDE (Figure 9) The nice thing about theutility is that it does all the work for you You do not have toedit any of the other files previously mentioned manuallyOpen the utility and enable LDAP on the User Information tab and the Authentication tabs Once you click OK to these settings Fedora updates your nsswitchconf andetcpamdsystem-auth files immediately Upon reboot yoursystem now uses PAM instead of your local passwd and shadowfiles to authenticate users

www l inux journa l com | 4 5

Figure 7 Enter a name and password

Figure 8 Enter information for the RManager account

Figure 9 The Built-in Authentication Utility

During login the local system pulls the LDAP accountrsquosPosix information from FDS and sets the system to matchthe preferences set on the account regarding home directoryand shell options With a little manual work you also canuse automount locally to authenticate and mount networkvolumes at login time automatically

Connecting XP clients is almost as easy TypicallyNT2000XP users are forced to use the built-in MSGINAdll

to authenticate to Microsoft networks only In the pastvendors such as Novell have used their own proprietaryclients to work around this but now the open-sourcepgina client has solved this problem To connect 2000XPclients download the main pgina zipfile from the projectpage on SourceForge and extract the files For this articleI used version 184 as I ran into some dll issues with

version 188 You also need to download and extract thePlugin bundle Run the x86 installer from the extractedfiles accepting all default options but do not start theConfiguration Tool at the end Next install the LDAPAuthplugin from the extracted Plugin bundle When doneinstalling open the Configuration Tool under the PginaProgram Group under the Start menu On the Plugin tabbrowse to your ldapauth_plusdll in the directory specifiedduring the install Check off the option to Show authenti-cation method selection box This gives you the option oflogging locally if you run into problems Without this theonly way to bypass the pgina client is through Safe ModeNow click on the Configure button and enter the LDAPserver name port and context you want pgina to use tosearch for clients I suggest using the Search Mode as yourLDAP method as it will search the entire directory if it can-not find your user ID Click OK twice to save your settingsUse the Plugin Tester tool before rebooting to load yourclient and test connectivity (Figure 10) On the next loginthe user will receive the prompt shown in Figure 11

ConclusionsFDS is a powerful platform and this article has barelyscratched the surface There simply is not room to squeezeall of FDSrsquos other features such as encryption or AD syn-chronization into a single article If you are interested inthese items or want to know how to extend FDS to otherapplications check out the wiki and the how-tos on theprojectrsquos documentation page for further informationJudging from our simple configuration here FDS seemsevolutionary not revolutionary It does not change the wayin which LDAP operates at a fundamental level What itdoes do is take the complex task of administering LDAPand makes it easier while extending normally commercialfeatures such as MMR to open source By adding pginainto the mix you can tap further into FDSrsquos flexibility andcost savings without needing to deploy an array of servicesto connect Windows and Linux clients So if you are look-ing for a simple reliable and cost-saving alternative toother LDAP products consider FDS

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.

46 | www.linuxjournal.com

SYSTEM ADMINISTRATION


Figure 10. Plugin Tester Tool

Figure 11. Login Prompt

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pgina Site: www.pgina.org

pgina Downloads: sourceforge.net/project/showfiles.php?group_id=53525


numbers (3891/3892). You also may want to use the same passwords for both the configuration Admin and Directory Manager accounts created during setup.

When setup is complete, use the syntax returned from the script to access the admin console (startconsole -u admin http://fully.qualified.domain.name:port) using the Administrator account (default name is Admin) you specified during the FDS setup. You always can call the Admin console using this same syntax from /opt/fedora-ds.
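As a sketch, with a hypothetical server named fds1.example.com and a typical admin-server port, the call looks like this (substitute the exact syntax your setup script printed, including the port it chose):

```shell
# Launch the management console from the install directory.
# Hostname and port are illustrative; 9830 is a common default
# for the admin server, but use whatever your setup reported.
cd /opt/fedora-ds
./startconsole -u admin http://fds1.example.com:9830
```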

At this point, you have a functioning directory server. The final step is to create a startup script or directly edit /etc/rc.d/rc.local to start the slapd process and the admin server when the machine starts. Here is an example of a script that does this:

/opt/fedora-ds/slapd-yourservername/start-slapd

/opt/fedora-ds/start-admin &
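A slightly fuller sketch of the rc.local approach, with the instance directory name as a placeholder for your own slapd-servername directory:

```shell
# Appended to /etc/rc.d/rc.local: start FDS services at boot.
# slapd-yourservername is a placeholder for your instance directory.
if [ -x /opt/fedora-ds/slapd-yourservername/start-slapd ]; then
    /opt/fedora-ds/slapd-yourservername/start-slapd
    /opt/fedora-ds/start-admin &
fi
```

The guard keeps the boot sequence from erroring out if the instance is ever renamed or removed.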

Directory Structure and Management
Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.

On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.
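The same logs live as flat files under the instance directory, so you also can follow them from a shell (the instance name below is a placeholder for your own slapd-servername directory):

```shell
# Follow directory activity and errors in real time.
# Paths assume the default /opt/fedora-ds layout.
tail -f /opt/fedora-ds/slapd-yourservername/logs/access \
        /opt/fedora-ds/slapd-yourservername/logs/errors
```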

From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for


Figure 1. Install Options

Figure 2. Main Console

Figure 3. Tasks Tab

Figure 4. Main Logs

logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.

Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.

To create a new user, right-click an OU under your domain, and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, Posix Account information and activate/de-activate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.
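Because the login ID is derived the same way every time (first initial plus last name, lowercased), a scripted bulk import can mimic what the New User screen does. A minimal sketch that emits one posixAccount entry as LDIF; the OU, domain and numeric IDs are hypothetical:

```shell
# Emit a minimal LDIF entry the way the New User screen would:
# cn from first + last name, uid from first initial + last name.
make_user_ldif() {
    first=$1
    last=$2
    uid=$(printf '%s%s' "$(printf '%s' "$first" | cut -c1)" "$last" \
          | tr '[:upper:]' '[:lower:]')
    cat <<EOF
dn: uid=$uid,ou=Accounting,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
cn: $first $last
sn: $last
uid: $uid
uidNumber: 5001
gidNumber: 5001
homeDirectory: /home/$uid
loginShell: /bin/bash
EOF
}

make_user_ldif Jeramiah Bowling
```

Feeding output like this to ldapadd would create users without touching the console.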

Replication
Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora/FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time, or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts, or use DNS. I recommend having DNS and NTP in place already to make life easier.

The next step is creating a Replication Manager account to bind one server's directory to the other, and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab, and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder, and click on the userroot database. In the right pane, click on enable replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN textbox, enter the following:

uid=RManager,cn=config

Click Add, and then the Save button at the bottom.



In addition to providing network authentication, you easily can extend FDS functionality across other applications, such as NFS, Samba, Sendmail, Postfix and others.

Figure 5. User Screen

Figure 6. Setting Up Replica

On server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).
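Before building the agreement, it is worth confirming that the RManager account can actually bind to each directory. A hedged example using the OpenLDAP client tools; the hostname, port and password are illustrative:

```shell
# Simple bind as the Replication Manager against server two;
# a base-scope read of the root DSE succeeds only if the bind does.
ldapsearch -x -h fds2.example.com -p 3891 \
    -D "uid=RManager,cn=config" -w secret \
    -b "" -s base "(objectclass=*)"
```

Repeat the check in the other direction, from server two against server one.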

Now, you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen, under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8). On the next screen, check off the box to enable Fractional Replication to send only deltas, rather than the entire directory database, when changes occur, which is very useful over WAN connections. On the Schedule screen, select Always keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, and copy the resulting ldif folder to server two to import and initialize it from the Directory Server Tasks pane. When you click Next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages. To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.
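Beyond watching the logs, you can prove replication end to end from a shell by writing to one master and reading from the other. A sketch with the OpenLDAP client tools; the hosts, suffix, DNs and password are all illustrative:

```shell
# Add a throwaway entry on server one...
ldapadd -x -h fds1.example.com -p 3891 \
    -D "cn=Directory Manager" -w secret <<EOF
dn: uid=repltest,ou=Accounting,dc=example,dc=com
objectClass: inetOrgPerson
cn: Repl Test
sn: Test
uid: repltest
EOF

# ...then, after a moment, confirm it arrived on server two.
ldapsearch -x -h fds2.example.com -p 3891 \
    -b "dc=example,dc=com" "(uid=repltest)"
```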

Authenticating Desktop Clients
With our infrastructure in place, we can connect our desktop clients. For our configuration, we use native Fedora 6 clients and Windows XP clients to simulate a mixed environment. Other Linux flavors can connect to FDS, but for space constraints, we won't delve into connecting them. It should be noted that most distributions, like Fedora, use PAM and the /etc/nsswitch.conf and /etc/ldap.conf files to set up LDAP authentication. Regardless of client type, the user account attempting to log in must contain Posix information in the directory in order to authenticate to the FDS server. To connect Fedora clients, use the built-in Authentication utility, available in both GNOME and KDE (Figure 9). The nice thing about the utility is that it does all the work for you. You do not have to edit any of the other files previously mentioned manually. Open the utility, and enable LDAP on the User Information tab and the Authentication tabs. Once you click OK to these settings, Fedora updates your nsswitch.conf and /etc/pam.d/system-auth files immediately. Upon reboot, your system now uses PAM instead of your local passwd and shadow files to authenticate users.
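Under the hood, the utility is just writing a few directives; the net effect on a Fedora client looks roughly like the following excerpts (the server name, suffix and port are illustrative):

```
# /etc/ldap.conf (excerpt)
host fds1.example.com
base dc=example,dc=com
port 3891

# /etc/nsswitch.conf (excerpt)
passwd: files ldap
shadow: files ldap
group: files ldap
```

If a client misbehaves, comparing these files before and after running the utility is a quick way to see what changed.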


Figure 7. Enter a name and password

Figure 8. Enter information for the RManager account

Figure 9. The Built-in Authentication Utility

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes at login time automatically.

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now, the open-source pgina client has solved this problem. To connect 2000/XP clients, download the main pgina zipfile from the project page on SourceForge, and extract the files. For this article, I used version 1.8.4, as I ran into some .dll issues with version 1.8.8.

Page 44: Linux Journal | System Administration Special Issue | 2009 · 2012-05-10 · 5 SPECIAL_ISSUE.TAR.GZ System Administration: Instant Gratification Edition Shawn Powers 6 CFENGINE FOR

logs and plugins The Directory tab is laid out by the domainsfor which the server hosts information The Netscape folder iscreated automatically by the installation and stores informationabout your directory The config folder is used by FDS tostore information about the serverrsquos operation In mostcases it should be left alone but we will use it when weset up replication

Before creating your directory structure you should carefullyconsider how you want to organize your users in the directoryThere is not enough space here for a decent primer on directoryplanning but I typically use Organizational Units (OUs) tosegment people grouped together by common securityneeds such as by department (for example Accounting)Then within an OU I create users and groups for simplifying

permissions or creating e-mail address lists (for exampleAccount Managers) FDS also supports roles which essentiallyare templates for new users This strategy is not hard and fastand you certainly can use the default domain directory struc-ture created during installation for most of your needs (Figure5) Whatever strategy you choose you should consult the RedHat documentation prior to deployment

To create a new user right-click an OU under your domainand click on NewrarrUser to bring up the New User screen(Figure 5) Fill in the appropriate entries At minimum youneed a First Name Last Name and Common Name (cn) whichis created from your first and last name Your login ID is createdautomatically from your first initial and your last name You alsocan enter an e-mail address and a password From the optionsin the left pane of the User window you can add NT Userinformation Posix Account information and activatede-activate

the account You can implement a Password Policy from theDirectory tab by right-clicking a domain or OU and selectingManage Password Policy However I could not get this featureto work properly

ReplicationSetting up replication in FDS is a relatively painless process Inour scenario we configure two-way multi-master replicationbut Red Hat supports up to four-way Because we already haveone server (server one) in operation we need another system(server two) configured the same way (Fedora FDS) withwhich to replicate Use the same settings used on server one(other than hostname) to install and configure Fedora 6FDSWith both servers up verify time synchronization and nameresolution between them The servers must have synced timeor be relatively close in their offset and they must be able toresolve the otherrsquos hostname for replication to work You canuse IP addresses configure etchosts or use DNS I recom-mend having DNS and NTP in place already to make life easier

The next step is creating a Replication Manager account tobind one serverrsquos directory to the other and vice versa On theDirectory tab of the Directory Server console create a useraccount under the config folder (off the main root not yourdomain) and give it the name Replication Manager whichshould create a login ID (uid) of RManager Use the samepassword for the RManager on both serversdirectories Onserver one click the Configuration tab and then the Replicationfolder In the right pane click Enable Changelog Click theUse Default button to fill in the default path under yourslapd-servername directory Click Save Next expand theReplication folder and click on the userroot database In theright pane click on enable replica and select Multiple Masterunder the Replica Role section (Figure 6) Enter a unique ReplicaID This number must be different on each server involved inreplication Scroll down to the update section below and in theEnter a new supplier DN textbox enter the following

uid=RManagercn=config

Click Add and then the Save button at the bottom On

4 4 | www l inux journa l com

SYSTEM ADMINISTRATION

In addition to providing networkauthentication you easily can extendFDS functionality across other applications such as NFS SambaSendmail Postfix and others

Figure 5 User Screen Figure 6 Setting Up Replica

server two perform the same steps as just completed for creating the RManager account in the config folderenabling changelogs and creating a Multiple MasterReplica (with a different Replica ID)

Now you need to set up a replication agreement backon server one From the Configuration tab of the DirectoryServer right-click the userroot database and select NewReplication Agreement A wizard then guides you throughthe process Enter a name and description for the agree-ment (Figure 7) On the Source and Destination screenunder Consumer click the Other button to add server two as a consumer You must use the FQDN and the correct port (3891 in our case) Use Simple Authenticationin the Connection box and enter the full context(uid=Rmanagercn=config) of the RManager account andthe password for the account (Figure 8) On the nextscreen check off the box to enable Fractional Replicationto send only deltas rather than the entire directorydatabase when changes occur which is very useful overWAN connections On the Schedule screen select Always

keep directories in sync to keep You also could choose toschedule your updates On the Initialize Consumer screenuse the Initialize Now option If you experience any diffi-culty in initializing a consumer (server two) you can usethe Create consumer initialization file option and copy theresulting ldif folder to server two and import and initializeit from the Directory Server Tasks pane When you clicknext you will see the replication taking place A summaryappears at the end of the process To verify replicationtook place check the Replication and Error logs on thefirst server for success messages To complete MMR set up a replication agreement on the server listing server oneas the consumer but do not initialize at the end of thewizard Doing so would overwrite the directory on serverone with the directory on server two With our setup complete we now have a redundant authentication infrastructure in place and if we choose we can addanother two Read-Write replicasMasters

Authenticating Desktop ClientsWith our infrastructure in place we can connect our desktopclients For our configuration we use native Fedora 6 clientsand Windows XP clients to simulate a mixed environmentOther Linux flavors can connect to FDS but for space con-straints we wonrsquot delve into connecting them It should benoted that most distributions like Fedora use PAM theetcnsswitchconf and etcldapconf files to set up LDAPauthentication Regardless of client type the user accountattempting to log in must contain Posix information in thedirectory in order to authenticate to the FDS server To connectFedora clients use the built-in Authentication utility availablein both GNOME and KDE (Figure 9) The nice thing about theutility is that it does all the work for you You do not have toedit any of the other files previously mentioned manuallyOpen the utility and enable LDAP on the User Information tab and the Authentication tabs Once you click OK to these settings Fedora updates your nsswitchconf andetcpamdsystem-auth files immediately Upon reboot yoursystem now uses PAM instead of your local passwd and shadowfiles to authenticate users

www l inux journa l com | 4 5

Figure 7 Enter a name and password

Figure 8 Enter information for the RManager account

Figure 9 The Built-in Authentication Utility

During login the local system pulls the LDAP accountrsquosPosix information from FDS and sets the system to matchthe preferences set on the account regarding home directoryand shell options With a little manual work you also canuse automount locally to authenticate and mount networkvolumes at login time automatically

Connecting XP clients is almost as easy TypicallyNT2000XP users are forced to use the built-in MSGINAdll

to authenticate to Microsoft networks only In the pastvendors such as Novell have used their own proprietaryclients to work around this but now the open-sourcepgina client has solved this problem To connect 2000XPclients download the main pgina zipfile from the projectpage on SourceForge and extract the files For this articleI used version 184 as I ran into some dll issues with

version 188 You also need to download and extract thePlugin bundle Run the x86 installer from the extractedfiles accepting all default options but do not start theConfiguration Tool at the end Next install the LDAPAuthplugin from the extracted Plugin bundle When doneinstalling open the Configuration Tool under the PginaProgram Group under the Start menu On the Plugin tabbrowse to your ldapauth_plusdll in the directory specifiedduring the install Check off the option to Show authenti-cation method selection box This gives you the option oflogging locally if you run into problems Without this theonly way to bypass the pgina client is through Safe ModeNow click on the Configure button and enter the LDAPserver name port and context you want pgina to use tosearch for clients I suggest using the Search Mode as yourLDAP method as it will search the entire directory if it can-not find your user ID Click OK twice to save your settingsUse the Plugin Tester tool before rebooting to load yourclient and test connectivity (Figure 10) On the next loginthe user will receive the prompt shown in Figure 11

ConclusionsFDS is a powerful platform and this article has barelyscratched the surface There simply is not room to squeezeall of FDSrsquos other features such as encryption or AD syn-chronization into a single article If you are interested inthese items or want to know how to extend FDS to otherapplications check out the wiki and the how-tos on theprojectrsquos documentation page for further informationJudging from our simple configuration here FDS seemsevolutionary not revolutionary It does not change the wayin which LDAP operates at a fundamental level What itdoes do is take the complex task of administering LDAPand makes it easier while extending normally commercialfeatures such as MMR to open source By adding pginainto the mix you can tap further into FDSrsquos flexibility andcost savings without needing to deploy an array of servicesto connect Windows and Linux clients So if you are look-ing for a simple reliable and cost-saving alternative toother LDAP products consider FDS

Jeramiah Bowling has been a systems administrator and network engineer for more thanten years He works for a regional accounting and auditing firm in Hunt Valley Marylandand holds numerous industry certifications including the CISSP Your comments are welcomeat jb50cyahoocom

4 6 | www l inux journa l com

SYSTEM ADMINISTRATION

Setting up replication in FDS is arelatively painless process

Figure 11 Login Prompt

Figure 10 Plugin Tester Tool

Resources

Main Fedora Site fedoraredhatcom

Fedora ISOs fedoraredhatcomDownload

Fedora Directory Server Site directoryfedoraredhatcom

Main pgina Site wwwpginaorg

pgina Downloads sourceforgenetprojectshowfilesphpgroup_id=53525

Page 45: Linux Journal | System Administration Special Issue | 2009 · 2012-05-10 · 5 SPECIAL_ISSUE.TAR.GZ System Administration: Instant Gratification Edition Shawn Powers 6 CFENGINE FOR

server two perform the same steps as just completed for creating the RManager account in the config folderenabling changelogs and creating a Multiple MasterReplica (with a different Replica ID)

Now you need to set up a replication agreement backon server one From the Configuration tab of the DirectoryServer right-click the userroot database and select NewReplication Agreement A wizard then guides you throughthe process Enter a name and description for the agree-ment (Figure 7) On the Source and Destination screenunder Consumer click the Other button to add server two as a consumer You must use the FQDN and the correct port (3891 in our case) Use Simple Authenticationin the Connection box and enter the full context(uid=Rmanagercn=config) of the RManager account andthe password for the account (Figure 8) On the nextscreen check off the box to enable Fractional Replicationto send only deltas rather than the entire directorydatabase when changes occur which is very useful overWAN connections On the Schedule screen select Always

keep directories in sync to keep You also could choose toschedule your updates On the Initialize Consumer screenuse the Initialize Now option If you experience any diffi-culty in initializing a consumer (server two) you can usethe Create consumer initialization file option and copy theresulting ldif folder to server two and import and initializeit from the Directory Server Tasks pane When you clicknext you will see the replication taking place A summaryappears at the end of the process To verify replicationtook place check the Replication and Error logs on thefirst server for success messages To complete MMR set up a replication agreement on the server listing server oneas the consumer but do not initialize at the end of thewizard Doing so would overwrite the directory on serverone with the directory on server two With our setup complete we now have a redundant authentication infrastructure in place and if we choose we can addanother two Read-Write replicasMasters

Authenticating Desktop ClientsWith our infrastructure in place we can connect our desktopclients For our configuration we use native Fedora 6 clientsand Windows XP clients to simulate a mixed environmentOther Linux flavors can connect to FDS but for space con-straints we wonrsquot delve into connecting them It should benoted that most distributions like Fedora use PAM theetcnsswitchconf and etcldapconf files to set up LDAPauthentication Regardless of client type the user accountattempting to log in must contain Posix information in thedirectory in order to authenticate to the FDS server To connectFedora clients use the built-in Authentication utility availablein both GNOME and KDE (Figure 9) The nice thing about theutility is that it does all the work for you You do not have toedit any of the other files previously mentioned manuallyOpen the utility and enable LDAP on the User Information tab and the Authentication tabs Once you click OK to these settings Fedora updates your nsswitchconf andetcpamdsystem-auth files immediately Upon reboot yoursystem now uses PAM instead of your local passwd and shadowfiles to authenticate users

www l inux journa l com | 4 5

Figure 7 Enter a name and password

Figure 8 Enter information for the RManager account

Figure 9 The Built-in Authentication Utility

During login the local system pulls the LDAP accountrsquosPosix information from FDS and sets the system to matchthe preferences set on the account regarding home directoryand shell options With a little manual work you also canuse automount locally to authenticate and mount networkvolumes at login time automatically

Connecting XP clients is almost as easy TypicallyNT2000XP users are forced to use the built-in MSGINAdll

to authenticate to Microsoft networks only In the pastvendors such as Novell have used their own proprietaryclients to work around this but now the open-sourcepgina client has solved this problem To connect 2000XPclients download the main pgina zipfile from the projectpage on SourceForge and extract the files For this articleI used version 184 as I ran into some dll issues with

During login, the local system pulls the LDAP account's Posix information from FDS and sets the system to match the preferences set on the account regarding home directory and shell options. With a little manual work, you also can use automount locally to authenticate and mount network volumes automatically at login time.
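A minimal sketch of the client-side plumbing this relies on, assuming a hypothetical NFS server name and export path (adjust both for your site): the name service switch resolves users through LDAP, and a wildcard autofs map mounts each user's home at login.

```
# /etc/nsswitch.conf -- resolve users and groups from LDAP after local files
passwd: files ldap
group:  files ldap

# /etc/auto.master -- hand /home lookups to the auto.home map
/home  /etc/auto.home

# /etc/auto.home -- wildcard map: "&" expands to the looked-up user name
# (server name and export path are illustrative)
*  -fstype=nfs,rw  fileserver.example.com:/export/home/&
```

With this in place, the home directory named in the user's LDAP entry is mounted on first access rather than kept permanently mounted.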

Connecting XP clients is almost as easy. Typically, NT/2000/XP users are forced to use the built-in MSGINA.dll to authenticate to Microsoft networks only. In the past, vendors such as Novell have used their own proprietary clients to work around this, but now the open-source pGina client has solved this problem. To connect 2000/XP clients, download the main pGina zip file from the project page on SourceForge and extract the files. For this article, I used version 1.8.4, as I ran into some DLL issues with version 1.8.8. You also need to download and extract the Plugin bundle. Run the x86 installer from the extracted files, accepting all default options, but do not start the Configuration Tool at the end. Next, install the LDAPAuth plugin from the extracted Plugin bundle. When done installing, open the Configuration Tool under the pGina Program Group in the Start menu. On the Plugin tab, browse to your ldapauth_plus.dll in the directory specified during the install. Check the option to Show authentication method selection box. This gives you the option of logging in locally if you run into problems; without it, the only way to bypass the pGina client is through Safe Mode. Now, click the Configure button, and enter the LDAP server name, port and context you want pGina to use to search for clients. I suggest using Search Mode as your LDAP method, as it will search the entire directory if it cannot find your user ID. Click OK twice to save your settings. Use the Plugin Tester tool before rebooting to load your client and test connectivity (Figure 10). On the next login, the user will receive the prompt shown in Figure 11.
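The Search Mode behavior described above can be sketched offline. This toy model is an assumption about the lookup logic, not pGina's actual implementation: it tries the configured context first, then falls back to searching the whole directory for the uid. The DNs and attribute names are illustrative only.

```python
# Toy model of pGina's Search Mode fallback (illustrative, not pGina code):
# prefer entries under the configured context, else search the whole tree.

def find_user_dn(directory, uid, context):
    """Return the DN whose uid matches, preferring the configured context."""
    # First pass: only entries that sit under the configured context.
    for dn, attrs in directory.items():
        if dn.endswith(context) and attrs.get("uid") == uid:
            return dn
    # Fallback: search the entire directory, as Search Mode does.
    for dn, attrs in directory.items():
        if attrs.get("uid") == uid:
            return dn
    return None

directory = {
    "uid=jsmith,ou=people,dc=example,dc=com": {"uid": "jsmith"},
    "uid=asmith,ou=contractors,dc=example,dc=com": {"uid": "asmith"},
}

# jsmith is found under the configured context directly.
print(find_user_dn(directory, "jsmith", "ou=people,dc=example,dc=com"))
# asmith lives outside that context and is found only by the fallback.
print(find_user_dn(directory, "asmith", "ou=people,dc=example,dc=com"))
```

The point of the fallback is resilience: users filed under a different OU still authenticate without reconfiguring each client.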

Conclusions

FDS is a powerful platform, and this article has barely scratched the surface. There simply is not room to squeeze all of FDS's other features, such as encryption or AD synchronization, into a single article. If you are interested in these items, or want to know how to extend FDS to other applications, check out the wiki and the how-tos on the project's documentation page for further information. Judging from our simple configuration here, FDS seems evolutionary, not revolutionary. It does not change the way in which LDAP operates at a fundamental level. What it does do is take the complex task of administering LDAP and make it easier, while extending normally commercial features, such as MMR, to open source. By adding pGina into the mix, you can tap further into FDS's flexibility and cost savings without needing to deploy an array of services to connect Windows and Linux clients. So, if you are looking for a simple, reliable and cost-saving alternative to other LDAP products, consider FDS.

Jeramiah Bowling has been a systems administrator and network engineer for more than ten years. He works for a regional accounting and auditing firm in Hunt Valley, Maryland, and holds numerous industry certifications, including the CISSP. Your comments are welcome at jb50c@yahoo.com.




Figure 11. Login Prompt

Figure 10. Plugin Tester Tool

Resources

Main Fedora Site: fedora.redhat.com

Fedora ISOs: fedora.redhat.com/Download

Fedora Directory Server Site: directory.fedora.redhat.com

Main pGina Site: www.pgina.org

pGina Downloads: sourceforge.net/project/showfiles.php?group_id=53525