No.5 Autumn 2006
With the acquisition of Micromuse earlier in the year, IBM have greatly enhanced their Systems Management portfolio with the NetCool suite of products. One such product is Omnibus.
Omnibus has been exceptionally successful in companies that require a very high throughput of events, such as telecommunications companies. Of the 25 largest telecoms companies, Omnibus is used in the Network Operations Centres (NOCs) of 24.
However, as IBM already had the Tivoli
Enterprise Console (TEC), there are now
two products that, at least superficially, do
the same thing. This article provides TEC users with a summary of the functionality, capabilities, benefits and downsides of Omnibus.
Core Component Comparison
To place the two products into perspective, the table below outlines the core components and sub-components that make up Omnibus and relates them, where possible, to the equivalent TEC components. The article then gives a brief overview of each Omnibus component and how the two products differ.
Object Server
Database
The key product within the NetCool suite
of products is Omnibus and the core
component of that is the Object Server.
Key to the Object Server's event management capabilities is its internal memory-resident database, which allows it to process an extremely high throughput of events without the performance degradation imposed by disk I/O. This speed of throughput is the main reason why it has been so successful within telecommunications companies.
The Object Server is responsible for event management and automation, manipulating event data through a series of SQL triggers and stored procedures written in a subset of ANSI SQL known as Object Server SQL. This is key to the Object Server's ability to perform data definition, manipulation and administration.
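As an illustration of the idea (the trigger name, group and two-minute threshold below are assumptions rather than part of any standard schema, and the exact syntax may vary by version), a simple temporal trigger in Object Server SQL might periodically delete cleared events:

```sql
-- Sketch of a temporal housekeeping trigger. alerts.status and its
-- Severity/StateChange columns are standard; the trigger name, group,
-- interval and threshold are illustrative assumptions.
create or replace trigger delete_clears
group example_triggers
priority 1
every 60 seconds
begin
    -- Severity 0 marks a clear event; remove those untouched for 2 minutes
    delete from alerts.status
        where Severity = 0 and StateChange < (getdate() - 120);
end;
```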
Automation
The two products handle automation in completely different ways. TEC processes events serially by
storing the event in the reception log first,
evaluating it and processing it using a
Prolog rules engine, before finally storing it
in the event table of the RDBMS. In
Omnibus, the validation of event data
against the Object Server’s event schema
occurs during the initialisation of the probe
or event list. This is shown clearly in the
Customisation section when we add a new
Inside
· News in a Minute
· Composite Application Management
· We Have Moved
· Using Odyssey to Deliver with ITM 6
· IBM Tivoli Monitoring as a Business Services Viewer
· How can buying new software reduce your licensing costs?
· Building data flows with IBM Tivoli Directory Integrator
· Using ITM6x Firewall Gateway Feature
Omnibus or TEC? What you need to know...
by Paul Campbell
Function | Omnibus | Tivoli Enterprise Console
Event Management | Object Server | TEC Event Server
Database | Internal memory-resident database | External database
Automation | SQL, SQL triggers and procedures | Proprietary Prolog rules engine
Distributed Event Management | No equivalent | TEC Gateway and State Correlation Engine (SCE)
Adapters | Probes | TEC Adapters
Event Consoles | Event List | TEC Console or WAS TEC Console
Event Visualisation | WebTop | Third party or customised
Event Enrichment | Impact | Third party or customised
Scalability, Availability and Communications | Uni- and bi-directional gateways; fail-over and fail-back capabilities; IDUC over TCP/IP | Custom integration; Framework (RIM object)
Licensing | Flex License Server required for validation of components and subcomponents | Trust Model
Configuration Utilities | Conductor and Config Manager GUI or command line | TEC Console or command line
Message Broker No.5 Autumn 2006
field to the Object Server later in the
article. This removes the need for defining
TEC Classes and event attributes through
BAROC files and also the need for a
validation process.
The actual processing and evaluation of events in Omnibus is handled by SQL triggers, or SQL procedures executed from triggers. Like TEC rules, triggers fire when conditions are met within the Object Server. There are three types: database, temporal and signal.
There are a number of generic triggers
that are defined and activated during the
creation of the Object Server schema.
These triggers handle common event
management processes like de-
duplication, event correlation and
housekeeping. A good example of the differences is de-duplication. In TEC, the dup_detect facet, defined in a BAROC file against a particular event class and attribute, is used in conjunction with a rule predicate such as first_duplicate to determine whether two events are the same. However, any subsequent change to the BAROC file requires the rulebase to be recompiled and reloaded and the Event Server restarted. In Omnibus, de-duplication is more dynamic and flexible. Duplicate events are identified by a unique Identifier attribute, often built at the probe level by concatenating several other attributes into an Identifier key. The Object Server then uses this field to match events with the same value, allowing duplicate events to be dropped. The following shows how the Identifier field is defined within the simulation probe:
@Identifier = $Node + $Agent + $Severity + $Group
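On the Object Server side, matching events are folded together by a trigger that fires on reinsert. The following is a sketch of what such a de-duplication trigger looks like, simplified from the standard automation; the exact trigger body may vary by version:

```sql
-- Sketch of a de-duplication trigger: when an arriving event's
-- Identifier matches an existing row, update the original row
-- instead of inserting a duplicate.
create or replace trigger deduplication
group default_triggers
priority 1
before reinsert on alerts.status
for each row
begin
    set old.Tally = old.Tally + 1;
    set old.LastOccurrence = new.LastOccurrence;
    set old.Summary = new.Summary;
end;
```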
Both triggers and TEC rules do a very good
job in providing automation and event
management. However, it’s worth noting
that Omnibus can be extended by utilising
other products within the NetCool suite
such as WebTop and Impact. These
products complement Omnibus by
providing event visualisation and further
event enrichment through integration with
external data sources and adding further
context to the original Omnibus event.
High Availability
Tivoli provides a means to configure TEC Gateways to forward events to a secondary TEC Server if the primary TEC Server becomes unavailable. However, there is no means to automatically fail over the TEC Consoles so that they point to the correct server if one of the servers goes down. Often this, and the fail-back process, becomes a manual task and introduces even more headaches when ticketing systems are involved.
Omnibus provides an out-of-the-box mechanism for failing over, and automatically (without user intervention) failing back, probes, gateways and desktops between a primary and a backup Object Server.
To configure fail-over between two Object Servers you must first configure a 'virtual' Object Server, and it is this that you point your probes and event lists at. The screenshot above details the fail-over configuration using the nco_xigen tool. As you can see, a VIRTUAL Object Server has been defined on host orbomnibus with TCP port 4100. This actually points to the PRIMARY Object Server, as both TCP ports match. The backup server, represented as Backup 1, resides on the host orbomnibus-bk
Trigger types:
Database: fires when event data matches a database condition defined within the trigger
Temporal: executes when a timer expires
Signal: fires on a system- or user-defined condition when a signal is raised
with a defined TCP port of 4101 which
actually points to the BACKUP_A Object
Server.
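Behind the GUI, nco_xigen writes these definitions to the interfaces file (omni.dat on Unix). A sketch of the entry for the pair described above, with the virtual server name assumed:

```
# Sketch of an omni.dat interfaces entry; the server name NCO_VIRTUAL is
# an assumption, while the hosts and ports follow the article's example.
[NCO_VIRTUAL]
{
    Primary: orbomnibus 4100
    Backup: orbomnibus-bk 4101
}
```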
Further configuration is required for fail back. For instance, at the probe layer you need to specify network timeout and poll values so that the probe will attempt to detect when the primary server has become available again, and the backup Object Server needs to be defined as a BackupObjectServer through its properties file.
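As a sketch, the probe properties involved might look like the following; the property names and values here are assumptions and should be checked against the documentation for your probe and version:

```
# Illustrative probe properties for fail-over and fail-back; names and
# values are assumptions, to be verified against the probe documentation.
Server          : "NCO_VIRTUAL"   # the virtual Object Server pair
NetworkTimeout  : 30              # seconds before the connection is considered dead
PollServer      : 60              # seconds between polls for the primary's return
```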
Once this is in place, if you shut down the primary server the event list detects the failure and prompts a logon to the backup server via the virtual Object Server.
It's usually a good idea to ensure user IDs and passwords are synchronised between the two Object Servers so that a fail-over occurs smoothly. Once the correct credentials are set, the event list is refreshed from the backup server. The probe fails over automatically.
When the primary Object Server becomes
available again, the event list detects this
and prompts you to reconnect to the
primary server.
The status section on the bottom right of
the event list lets you know which Object
Server you are currently connected to.
Customisation
One of the key benefits Omnibus has over TEC is the ability to easily customise the event server database, adding new databases, tables and columns or modifying existing ones, and to incorporate these additions within a running event list. This was not easily achieved with a TEC Console and could certainly not take place without restarting the server.
The example above adds a regional field
to the ObjectServer and console and
demonstrates how easily this can be
accomplished using the standard tools
available through the Object Server.
Adding a field to the Object Server's event table (alerts.status) can be accomplished either through the command line or the GUI. The screenshot below shows the new field being added using the nco_conf tool. This tool is at the heart of the Object Server and can be used to configure and manage every aspect of the Object Server(s).
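The same change can also be scripted; a sketch in Object Server SQL, run through the nco_sql command-line utility (the column size chosen here is an assumption):

```sql
-- Add a Region column to the standard event table; the 64-character
-- size is illustrative.
alter table alerts.status add column Region varchar(64);
```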
Once the new field has been added a
restart of the Object Server isn’t
necessary. However, all event lists or
consoles that are open need to be
refreshed. This is quite easily
accomplished by performing a ‘resync’
from the event list GUI. Once this has
been done the new field can be added as
a column to the event list.
And we can then see that the event list
will now include the new ‘Region Slot’.
Desktop
The desktop component is made up of the Conductor, the Event List and Tools. The Conductor, as the name suggests, is the top-level GUI from which other GUIs are launched, or conducted, one of which is the Event List.
The Event List is the equivalent of a standard TEC Console. Much as in a TEC
Console, it is made up of one or more
Monitor Boxes that represent subsets of
events based on filters. This view provides
an executive summary of the events being
represented by the filter. For example,
filters can be set up to represent all events
that are in maintenance mode or all
critical events that have occurred in the
last 10 minutes. From each Monitor Box
you are able to drill down and list the
events defined within a sub-event list
filter. From within the sub-event list, tools
(such as ping or ssh or internal SQL
commands) can be run against selected
events using the context of that event to
pass details to the tools.
Probes
Omnibus Probes are key components of the Omnibus product. They are responsible for obtaining event information, either by receiving it from another source or by analysing a local source, and passing it on to the Object Server. Probes can be compared to TEC Adapters in terms of functionality.
However, Omnibus Probes come in many shapes and forms, both generic and vendor-specific. For example, similar to the TEC Adapters, there are generic probes for log scraping, syslogd monitoring and SNMP traps, but there are also Universal probes that are more closely related to the functionality of the Tivoli Monitoring 6.1 Universal Agent: the Exec Probe, Generic ODBC Probe, STDIN Probe and TCP/IP Socket Probe are all available. There are also specific probes designed and developed by vendors to integrate their own Enterprise Management Systems into Omnibus or to interface directly with a particular application or hardware device.
One of the key benefits of the probe over the TEC Adapter is the ability to parse events for duplication, or to enrich key events, at the probe's source before passing the event to the Object Server. In the Tivoli world, events sent from Adapters can be filtered or discarded at the source, but any parsing means that events have to pass through the State Correlation Engine or end up at the TEC rulebase before any manipulation can take place. At the probe level, however, event enrichment is possible through the probe's local rules file: a lightweight, shell-like scripting language with built-in functions for string and arithmetic manipulation, regular expressions and conditional logic. For example, where the Region field was added earlier, we might have created a probe rule as shown below:
if(regmatch($Node, ".*_a$|.*_b$")) {
    @Region = "South West"
}
else if(regmatch($Node, ".*_c$|.*_d$")) {
    @Region = "North West"
}
else if(regmatch($Node, ".*_e$|.*_f$")) {
    @Region = "South East"
}
else if(regmatch($Node, ".*_g$|.*_h$")) {
    @Region = "South West"
}
else {
    @Region = "Unknown"
}
Like TEC Adapters, a probe also has the
ability to cache or store events locally if
the connection to the Object Server has
been lost and re-send the stored events as
soon as the connection has been re-
established. However, more importantly, a
probe can be configured to automatically
fail over to a backup Object Server based
on timeouts being met.
For those of you interested in using the TME TECAD probe to integrate ITM 6.1, Tivoli Enterprise Console and NetView events into Omnibus, please take a look at the following tip on our web site:
http://www.orb-data.com/index.php?pageId=791
Omnibus Gateways
Integrating the Tivoli Enterprise Console with problem management systems or asset management databases often requires complex rule writing or a method of extracting the events table using a third-party tool, be it Perl, SQL,
procedures or triggers. Ensuring the extracted data is in the correct format can often be time-consuming and can lead to inconsistencies if the event data changes down the line.
To aid integration, Omnibus provides a series of Gateways that extract data from the Object Server into some form of destination. The destination varies by gateway type and, as with the probes, ranges from generic gateways (e.g. ODBC, SNMP or flat file) through vendor-specific gateways (such as Oracle databases, Peregrine and Remedy) to Tivoli-related gateways.
More importantly, gateways can provide
failover and resilience capabilities to the
Object Server by synchronising event data
in a bi-directional flow between two
Object Servers. This is known as a Bi-
Directional gateway. A Uni-Directional
gateway is available as a mechanism for
building a hierarchical architecture of
Object Servers by moving specific events
to specific Object Servers for presentation
to specific customers or organisations.
Licensing
If you’re excited by what you’ve read and
it’s whetted your appetite for what
Omnibus has to offer with regards to
functionality, reliability and performance
then I think at this point it’s worth
mentioning licensing!
All NetCool products require a unique
license per product and, for the most part,
per component (each probe, each
gateway, each desktop). Each license is
loaded into a central licensing server
where they are checked in and checked
out each time the product or component is
activated. Some of the products send a heartbeat to check that licenses are still available and have not expired, so that the product can remain operational. In addition, these licenses are tied to specific host IDs, which means that each time you install you will need to apply for a new license. Luckily, IBM has stated that the license mechanism will eventually fit into its current Trust model and the license server function will be scrapped.
Technical Workshops
In the New Year we will be running
technical workshops in our offices in
Burnham to enable customers to get a
good understanding of the capabilities of
the new products.
If you would like to register interest in attending these events please send your details to [email protected].
IBM's Acquisitions bolster the Tivoli line
IBM has bought several companies in the past year, and it has begun integrating their technologies with its Tivoli suite. Alongside the availability tools from its purchase of MicroMuse, it has incorporated discovery tools from Collation into the Tivoli Change and Configuration Management database. The database will show IT executives whether a specific business service is available, while another user interface will show administrators whether specific hardware components are failing.
In addition, the company also intends to unveil a roadmap for adding products from its purchase of MRO Software; a deal that is expected to close "very soon", said Al Zollar, the general manager of IBM's Tivoli division. MRO Software's Maximo application is designed to manage equipment such as power plants.
Google plans solar-powered HQ
Google plans a solar-powered electricity system at its Silicon Valley headquarters that will rank as the largest US solar-powered corporate office complex. Google said it is set to begin building a rooftop solar-powered generation system at its California headquarters capable of generating 1.6 megawatts of electricity, or enough to power 1,000 California homes. "We are going to be producing roughly 30 percent of the power that we use," said David Radcliffe, vice president of real estate.
Government IT projects 'running 17 years late'
According to a list compiled by the Liberal Democrats, government IT projects are running a total of 17 years and three months late. Delayed projects include the Child Trust Fund (six months late) and the Pension Schemes project (one year behind schedule). Other delays are affecting the implementation of BS7799 compliance for information security in the Government Actuary's Department, which is over three years behind schedule, and eContact Exploitation, which is expected to be five months late.
Theresa Villiers, Conservative shadow chief secretary to the Treasury, commented: "These latest figures demonstrate just how Gordon Brown has managed to spend so much and achieve so little. If he can't run an IT project, how's he going to run the country?"
News in a Minute
The Virtual Keyboard is here
We can't go this time without mentioning the I-Tech Virtual Laser Keyboard (VKB), which uses both infrared and laser technology to generate an invisible field and project a full-size virtual QWERTY keyboard on any surface. You can use the VKB with both PCs and compatible mobile devices, Smartphones and PDAs. Direction technology based on optical recognition enables the user to tap the images of the keys, complete with realistic tapping sounds, which feed into the compatible PDA, Smartphone, laptop or PC.
All we need now is a projected screen and we will be able to have a full-size computer in the space of a Mars bar. The I-Tech Virtual Laser Keyboard is available now at http://www.virtual-laser-keyboard.com.
Composite Application Management
by Dave Webb
Changes to System Management
Businesses are finding that the nature of system management is changing significantly.
In the past, applications were usually
contained on a single server and
“management” consisted of little more
than availability monitoring. Now,
applications have their business logic and
data spread over a number of different
systems, which may be Web servers,
Database servers, J2EE application
servers, integration middleware or even
mainframe and other legacy systems.
Whilst availability monitoring is still
useful, the availability of all the individual
components is no guarantee of a healthy
application.
To complicate matters further,
organisations have much greater
expectations of System Management. As
many of these new applications are
developed in-house, the role of Systems
Management extends from problem
detection to troubleshooting, problem
solving and helping development teams
by providing information from production
systems. Additionally, information has to
be provided elsewhere for things such as
Service Level compliance reports, which
often require data about application
response times rather than information
about the availability of individual
components.
Accurate Monitoring
The most significant challenge is to
provide accurate monitoring of an
application that consists of many different
components. By considering what
“accurate monitoring” means, the solution
to the problem becomes obvious. Any
monitoring should reflect the end-user
experience of this application, and the
simplest way to do this is to replicate end
user activity.
As most of these composite applications
are web-based, this at first appears to be
a simple task; as monitoring the
availability of a web page is reasonably
straightforward. However, exercising all the components of an application usually requires a transaction spanning several web pages, including tasks such as logging in. This being the case, any monitoring solution needs the ability to replicate such complex behaviour.
Application Instrumentation
When such monitoring finds an
application down or, just as seriously and
just as likely, finds a significant
performance problem, the next step is to
identify the cause. The traditional
approach would be to consult the relevant
specialists, for example the DBAs. The
problem with this is that, whilst the root
cause of a poorly-performing application
may reside in the database, if this is due
to a poorly written query then the
database itself will appear healthy to the
DBA. Even if a monitor were tracking
transaction times within the database, an
average over all the transactions in the
database may still appear acceptable.
The solution then must be to track each
individual transaction, capturing
information about those that appear
problematic. Until recently, the only way
to do this would be to instrument the
various elements of the application by
modifying the source code and placing API
calls to enable such transaction tracking.
This overhead is manageable for new
applications being developed, but altering
legacy applications is almost certainly
unacceptable and for third party
applications such changes are impossible.
Now though, most applications have a
J2EE application server at their centre.
This has the benefit of providing a central
hub where the transaction monitoring can
be performed and uses an architecture
that simplifies the required application
instrumentation. As Java applications are
compiled into byte-code rather than native
binaries it is possible to automatically
instrument them; either by processing this
byte-code prior to installation on the
Application Server or, preferably,
dynamically adding the instrumentation
within the Application Server as the
application is run.
Since one of the major benefits of J2EE is
the APIs it provides, such as JDBC or the
Java Message Service (JMS), a transaction
monitor can identify calls to these APIs
and so identify the separate components
of the transaction; such as individual SQL
queries.
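The API-interception idea can be sketched in plain Java, outside any application server (the class and method names here are illustrative and not part of ITCAM): a dynamic proxy wraps an interface and records how long each call takes, which is essentially what instrumentation does for JDBC or JMS calls.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class TimingProxyDemo {
    // Stand-in for an instrumented API such as JDBC; illustrative only.
    public interface Query {
        String run(String sql);
    }

    // Wrap any interface implementation in a proxy that times each call
    // and appends a record to the supplied log.
    @SuppressWarnings("unchecked")
    public static <T> T timed(Class<T> iface, T target, List<String> log) {
        InvocationHandler handler = (proxy, method, args) -> {
            long start = System.nanoTime();
            Object result = method.invoke(target, args);
            long micros = (System.nanoTime() - start) / 1_000;
            log.add(method.getName() + " took " + micros + "us");
            return result;
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        Query real = sql -> "42 rows";                // pretend database call
        Query instrumented = timed(Query.class, real, log);
        System.out.println(instrumented.run("SELECT ..."));
        System.out.println(log.size());
    }
}
```

Real instrumentation works at the byte-code level rather than through explicit proxies, but the effect is the same: each API call is timed without changing the application's source.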
IBM Tivoli Composite Application Management
IBM's solution to these problems is the
IBM Tivoli Composite Application
Management (ITCAM) suite of products.
The ITCAM family of products provides genuine integration, with each able to connect to the others in context, allowing problem identification and diagnosis to take place quickly and easily, utilising all of the management information available.
The information from all products is
available in a single view in the Tivoli
Enterprise Portal, which can also be used
to launch the ITCAM products in context.
ITCAM for WebSphere provides real-time
problem detection, analysis and repair for
applications running on WebSphere
Application Server and provides
transaction correlation spanning J2EE,
RDBMS, CICS and IMS. It provides
operations and developers with
comprehensive, deep-dive analysis to identify the root cause of application slowdowns.
ITCAM for Response Time Tracking (RTT)
is an end-to-end transaction management
solution which monitors end user
response time and analyses transaction
performance using both robotic and real-
time techniques. It allows easy
identification of performance bottlenecks
in applications spanning J2EE application
servers, databases and mainframe
systems.
ITCAM for Service Oriented Architectures
(SOA) monitors, manages and controls the
Web services layer of IT architectures and
allows users to drill down to the
application or resource layer for quick
identification of the source of performance
bottlenecks or application failures. ITCAM
for SOA gives both clear status views and
enterprise-wide end-to-end views of web
services and provides reports on service
level fulfilment.
An Example
The diagram below shows an example
composite application and how it relates
to the ITCAM family of products. The
diagram is a rough approximation and
should not be used to infer functionality.
For example, whilst no component of
ITCAM for WebSphere resides on a
database server it can still analyse
database transaction times by tracking
calls to the JDBC API.
Please send an email to [email protected] if you would like to take advantage of our web-site evaluation service, which includes free-of-charge, non-intrusive monitoring of your external site.
We Have Moved
As part of Orb Data's expansion plans, the
company has relocated to The Chapel, in
the grounds of historic Grenville Court in
Burnham, Buckinghamshire; a move which
provides the company with four times the
office space available at our previous
premises.
Included in this space are dedicated
training facilities allowing Orb Data to
offer training courses and seminars from
its own offices.
"This is an exciting time in the life of Orb
Data and acquiring our new premises in
Burnham will aid our
growth over the coming
years," said Nigel Brown, Managing Director. "The Chapel provides
an elegant and stylish
setting for our staff to
work and customers to
enjoy."
Our new address is:
The Chapel
Grenville Court
Britwell Road
Burnham
Bucks
SL1 8DF
United Kingdom
Tel : +44 (0) 1628 550450
Fax : +44 (0) 1628 550451
Using Odyssey to Deliver with ITM 6
by Ben Fawcett
Challenges
IBM Tivoli Monitoring version 6.1 has been with us for over a year now. Many new and old Tivoli customers have taken the plunge to transfer to the new technology and try to make the promises of simplified monitoring a reality.
There have been a number of teething
problems with the migration and
upscaling from the Omegamon product to
the ITM offering. We have seen 3 fix packs
so far and these have included a mix of
bug fixes, limitation improvements and a
number of new features making their way
into the product. Some have been demanded by customers whilst others are welcome enhancements to address usability.
So how do you make ITM 6 work for you?
This article looks at how the latest release
of Odyssey 3.1 provides a number of
features to deliver the best from any ITM 6
implementation.
Scalable Deployment
Our experiences with ITM 6.1
deployments, particularly in large-scale
environments, have shown time and again
the value of an underlying management
infrastructure.
For many of our customers, who have a
history with IBM Tivoli products, this
means leveraging the existing Tivoli
Framework implementation to manage
each phase of the ITM 6.1 solution
delivery. This includes scoping, pre-
requisite checking, preparation, roll out
and configuration during the
implementation. It also includes managing
the ongoing maintenance, improvement
and extension of the service.
To successfully scale your ITM 6.1
deployment requires that each of the
implementation phases is well defined,
consistent and repeatable. To successfully
deliver the service requires clean
interfaces, a robust infrastructure and
clear integration with other systems
management services.
Integrated Environments
Odyssey 3.1 already harnesses the Tivoli
infrastructure to provide clean, usable and
scalable interfaces to support and deliver
Tivoli services to Operators. Odyssey
builds on this strength by providing a
unifying link between ITM 6.1 Agents and
their Tivoli Endpoints. This immediately
provides a consolidated view of the new
availability world alongside other Tivoli
services, and an outstanding level of reach
for ITM 6.1 agent management and
configuration.
Agent Management
Using the power of the Odyssey Wizard
processing engine and with actions and
status checks bundled out of the
box, taking control of your ITM 6
estate has never been easier.
The ITM 6 Agent Wizard allows
management of ITM 6 Agent
resources. Odyssey is able to
interact with the agents and their
host system in a number of ways. It
can query their status, perform
configuration operations, execute
commands against the underlying
system, retrieve monitoring metrics
that have been collected by the
agent and publish reports of the
monitored data.
Using Odyssey it is possible to
perform actions like:
• Agent status healthcheck
• Agent configuration
• Monitored data snapshot and analysis
• Automated ITM 6 reporting
• Agent audit
• Agent roll-out
Remote Access
Odyssey already provides one-click launch
of remote command windows on your
Tivoli Endpoints. This has been extended
to include ITM 6 Agent systems. This
vastly simplifies the troubleshooting and
analysis process and greatly reduces the
administrative overhead for operators.
Right click on an Agent or ITM 6 System
and you also have direct access to log and
configuration files.
Scheduled Publishing of ITM 6 Reports
Odyssey is able to present your ITM 6 data
alongside other information from your Tivoli environment: side-by-side views of monitoring metrics, inventory data, TEC events, software distributions and more.
You can even display live logfiles from the
agent system and open a remote
command window automatically when you
browse to the workspace.
Any views that you create within the
Odyssey ITM 6 Operations Console can be
published as HTML reports available
through Odyssey's built in web portal.
Reporting is scheduled and controlled
through the Odyssey ITM 6 Wizard,
providing great flexibility on how and
when data is published.
For further information about Odyssey visit http://www.orb-data.com/odyssey or send an email to [email protected].
Rent Odyssey Now!
To complement our standard licensing agreement, Orb Data is now making Odyssey available to customers on a monthly rental basis. In an uncertain landscape of technological change, but with ongoing service demands, you can now immediately reduce costs without the need to justify a formal capital expenditure. For more information please contact [email protected].
cannot be opened or saved in a graphic
editor. In the example at the bottom left of
the page we are using a pre-supplied .IVL
file called united_kingdom_ireland and a
default style sheet called default.css that
generates the items displayed on the
map; server images in this case. More of
these later.
A second example, and one that we are
finding more popular amongst the
customers we are visiting, is a
representation of services as logical units.
Typically this would be a Business Service
View which could represent services that
are made up of many applications and
physical servers. For example, a Bank may
want an object to display the service for
Internet Banking. In other words “Can our
customers use Internet banking or not?”
In the example at the top of the page, we
have used an image called orb-data.jpg as
the background. This could be any image
but would typically be quite simple so as
to not interfere with the service views.
As said previously, the style of the view is
defined using Cascading Style Sheets, or
Most users of Tivoli Monitoring tools have become accustomed to viewing simplistic event lists rather than the graphical event statuses available with some other vendors' Systems Management tools. However, with the introduction of IBM Tivoli Monitoring (ITM) 6.1, customers can now display logical views that, with a little work, can be used to display key Business Services.
This is something that most customers have wanted, but they have often been loath to make the investment needed to buy IBM's TBSM or Managed Objects' Formula products because the functionality delivered by these two applications is far beyond what is required. However, in many installations we have been involved with, we have found that a simple service status view is all that is needed.
Logical ViewsThere are two types of views that can be
used:
1. Logical Units displayed on a map
2. Logical Items displayed on an
Image/Blank screen
For example, if you have a branch network
or regional sites, you might want to
display these on a map. ITM 6.1 uses a
geo-referenced map which is a special
type of graphic that has built-in
knowledge of latitude and longitude and
can be zoomed into and out of quickly.
The Portal uses proprietary .IVL files that
IBM Tivoli Monitoring as a BusinessServices Viewer by Simon Barnes
.css files. These external style sheets
enable you to change the appearance and
layout of all pages by editing one single
CSS document. The style sheet used here
(OrbData_Top_Level.css) defines two
objects; firstly the rectangular box and
secondly the small circular icon in the
corner. This is a monitor to show that the
infrastructure that creates these views is
working (i.e. TEP Server, TEMS, and TEC).
Once these views have been defined,
there are two simple steps (see above) to
make them reflect the situations and
systems you want:
Step 1: From the logical item properties window, assign the system or managed system lists associated with the service you want to monitor.

Step 2: From the logical item situations window, select the specific situations that should reflect the service's status and associate them with the logical item.
Linking to a Sub-View
Each view that you create can also be
linked to sub-views so that you can click
on a service and be shown a view of the
sub-services. For example, Internet
Banking might be linked to a view
showing the infrastructure that makes up
this service and this could be displayed on
the desktops of the support team that
deals with this service.
This is performed by right clicking on the
item in the Graphic View, selecting “Link
to -> Link Wizard” and following the
wizard instructions.
Is it really that simple?
Well, not quite. If you are only using ITM
6.1 and you want every event to be
reflected in the view, then these simple
displays will be enough. However, there
are two circumstances where more work
will be needed:
1. Legacy Events – Events delivered through the Tivoli Enterprise Console (TEC), or through any third-party tool such as NetView, will not update the TEP and will therefore not be displayed as part of the Logical views.
2. Service Logic – Not all events will cause
a service to be unavailable and therefore
merely assigning a situation (an event) to
a logical item will not give a clear picture
as to whether that service is up or down.
Legacy TEC Events
ITM 6.1 logical views provide no route for incoming TEC events to impact icons, views and situations in the TEP. As events from sources such as NetView and the TEC adapters must feed the infrastructure-based service views, a mechanism is required to convert them to ITM 6.1 Situations and thus drive the icon changes. An icon in the TEP is changed by the following process:
1. A Situation status changes, e.g. a
Situation is raised
2. The icon is checked to see if the
ManagedSystem linked to the Situation is
assigned to it
3. The icon is checked to see if this
Situation is associated with it
4. The Situation status is reflected in the
icon
Only two criteria of a situation determine which icons it can affect: the Managed System and the Situation Name.
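The matching rules above can be sketched as follows. This is an illustrative model only (the names and data structures are invented for the example), not actual TEP code:

```python
# An icon reflects a situation's status only when BOTH criteria match:
# the managed system that raised the situation is assigned to the icon,
# and the situation name is associated with the icon.

def icon_should_update(icon, situation):
    """Return True if the fired situation should change this icon's status."""
    return (situation["managed_system"] in icon["systems"]
            and situation["name"] in icon["situations"])

# A hypothetical "Web Servers" logical item with its assignments
web_icon = {
    "systems": {"ukweb01", "ukweb02"},
    "situations": {"MS_Offline"},
}

fired = {"name": "MS_Offline", "managed_system": "ukweb01"}
print(icon_should_update(web_icon, fired))   # True: both criteria match

other = {"name": "NT_Paging_File_Critical", "managed_system": "ukweb01"}
print(icon_should_update(web_icon, other))   # False: situation not associated
```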
To extract TEC events into an ITM 6.1 Attribute Group, we have defined a mechanism that uses the Universal Agent's ODBC data provider. The extracted data is represented in the TEP as an Attribute Group of a Managed System, as defined by the Universal Agent metafile.
This creates a single Managed System
with all the data in it. We then create
multiple situations (at least one per icon)
to drive multiple icons.
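As an illustration of what such a metafile might contain — the application name, data source, credentials, table and column definitions below are all hypothetical, and the exact syntax should be taken from the Universal Agent ODBC data provider documentation rather than from this sketch:

```
//APPL TECEVENTS
//NAME EVENTDATA K 300
//SOURCE ODBC tec_dsn user=tecuser pswd=********
//SQL Select hostname, class, severity, msg from tec_t_evt_rep
//ATTRIBUTES
Hostname D 64
Class    D 64
Severity D 16
Msg      D 256
```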
Using Logic to change icons
There are cases where, although an alert has fired, you might not want a service to be marked as down. For example, if we are monitoring whether the Web Servers service is up (green) or down (red), we probably don't want a 100% CPU alert to mark the Web Servers service as down: you want to know it is happening, but it does not make the service unavailable. ITM 6.x allows this to be achieved without using TEC rules, through Situation Comparisons and Correlate Situations Across Managed Systems.
Situation Comparisons
A Situation Comparison is a specific type of situation that compares the results of other situations that have been distributed to one or more systems. All of the situations being compared must be distributed to all of those systems.

For example, we could create a situation called WebServers that fires if any of MS_Offline, UDB_Status_Warning, or NT_Paging_File_Critical is true; in other words, an OR comparison. Alternatively, the situation could fire only if MS_Offline, UDB_Status_Warning, and NT_Paging_File_Critical all fire together; in other words, an AND comparison.
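In the Situation editor, such comparisons are expressed by embedding the component situations in the formula. As a rough, non-authoritative sketch (the editor generates the exact syntax, so treat this as illustrative only), the two variants take approximately this shape:

```
OR  comparison: *IF *SIT MS_Offline *STATE True *OR  *SIT UDB_Status_Warning *STATE True *OR  *SIT NT_Paging_File_Critical *STATE True
AND comparison: *IF *SIT MS_Offline *STATE True *AND *SIT UDB_Status_Warning *STATE True *AND *SIT NT_Paging_File_Critical *STATE True
```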
These Situation Comparisons need to have intervals set and then be distributed before they become active. If you compare a situation that has not been distributed to all of the systems that the Situation Comparison is sent to, it will be distributed to those systems automatically.
Correlate Situations Across Managed Systems
Correlate Situations Across Managed Systems compares a single situation across multiple systems. For example, if we wanted to create a situation that checks whether the WebServers situation has fired on more than one system, we would use Correlate Situations Across Managed Systems. This is particularly useful when you are using load-balanced servers, or when a single server failure does not cause the service to be down.
To do this we must assign a system
against the logical item we are using (e.g.
Web Servers) and then create the
situation against the logical item. In this
case, associating a previously created
situation will not work.
Like the idea?
In summary, ITM 6.1 offers a new facility that, with a little work, can provide a feature that has until now been missing from the core IBM availability options.
If you would like to discover more about this facility, or to take advantage of Orb Data's experience in this area to create the views that you want, then contact us on 01628 550465 or by email at [email protected].
If you would like to see ITM 6.1 using service views then give us a call and we will try to arrange a visit to one of our customers.
Ring +44 (0) 1628 550450
or send an email to [email protected]
It's a strange question, but can buying more software really save you money on your overall software costs? Well, yes; provided that your next software purchase is IBM Tivoli License Compliance Manager (ITLCM).
Many organisations, particularly those
with large and complex software estates,
struggle to identify and retain accurate
license usage records resulting in poor
procurement practice such as site-
licensing or maximum-concurrency
licensing. And the multitude of software
licensing policies employed by vendors
can make software tracking and license
compliance difficult to enforce.
This is why the University of Liverpool
(www.liv.ac.uk) approached us for help.
The University has 20,000 students
accessing hundreds of applications across
a diverse range of locations and hardware
platforms and believed that they could
reduce their licensing costs if only they
had usage data available prior to contract
negotiations. Having operated an
extensive proof of concept on-site, the
University recently purchased ITLCM and
is looking forward to reducing the costs of
both maintenance renewals and future
software purchases.
Steve Aldridge, Systems Manager at the
University explains. "License management
is another area of IT which is very
challenging in the Higher Education
environment. ITLCM, as a point solution
within the strategic Tivoli Enterprise
Systems Management brand, offers a
comprehensive approach that can deal
with the complexities we face. Once again,
Tivoli has given us the flexibility and ease
of deployment that we need."
With advanced software inventory, usage
monitoring and reporting capabilities,
businesses can identify exactly what
software licenses are installed, which are
being used and which are actually
required. This information also assists in
proving compliance during business
audits conducted by vendors, the
Federation Against Software Theft (FAST)
or other regulatory bodies.
So what can License Manager do for you?
Take the example of a company that needs to upgrade a piece of existing software. If the software is licensed by concurrent users, without usage records the company would have to pay for every system the software has been installed on: the IT department would have no way of controlling how many licenses were concurrently in use, and therefore could not prove that the number of licenses in use is less than the number deployed. After purchasing ITLCM the company can run reports that show the maximum number of users at any one time and, more importantly, prove this to the vendor.
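ITLCM generates these reports itself; purely to illustrate the underlying calculation, here is a minimal sketch of how peak concurrency can be derived from a set of usage sessions (the session data below is invented):

```python
from datetime import datetime

def max_concurrent(sessions):
    """Return the peak number of overlapping (start, end) usage sessions."""
    events = []
    for start, end in sessions:
        events.append((start, 1))    # a user starts using the software
        events.append((end, -1))     # a user stops using the software
    # Sort by time; at the same instant, process an end before a start
    events.sort(key=lambda e: (e[0], e[1]))
    peak = current = 0
    for _, delta in events:
        current += delta
        peak = max(peak, current)
    return peak

sessions = [
    (datetime(2006, 9, 1, 9, 0),  datetime(2006, 9, 1, 11, 0)),
    (datetime(2006, 9, 1, 10, 0), datetime(2006, 9, 1, 12, 0)),
    (datetime(2006, 9, 1, 11, 30), datetime(2006, 9, 1, 13, 0)),
]
# Three installations, but never more than two users at the same time
print(max_concurrent(sessions))  # 2
```

The point of the calculation is exactly the argument above: the number of installed copies (three here) can be much larger than the peak concurrent use (two), and it is the latter figure that the vendor negotiation needs.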
Gartner (www.gartner.com) states that
“companies can expect to achieve 30%
cost savings in the first year and 5-10%
annually with an effective Software Asset
Management program”.
This translates to an ROI typically
available in less than 12 months.
How can buying new software reduce your licensing costs? by Marie Broxholme
Advantages and Benefits of ITLCM

Web-based architecture – ITLCM is based on IBM WebSphere, giving complete, secure license management through a simple Web-browser interface, regardless of location. ITLCM also provides users with role-based, decentralised, thin-client administration tools for optimum flexibility.

Software Lifecycle Management – You can link software installation and license use to product entitlements and contracts. This enables easy reconciliation of how much software is deployed and how much is actually used, making for better-informed license procurement activities.

Self-Updating Agent – There is no need to go from desktop to desktop manually updating each user's license instructions or agents. Intelligent self-update agents are updated automatically when newer agent versions or updated licensing information become available.

Extensive License Model Support – Able to manage several types of software licensing models, including CPU-based, user-based, machine-based, concurrent-use, multi-use, sub-capacity use and more. It tracks software deployment and use according to the specific licensing model subscribed to; key to effective license management.
Where do these savings come from?
35% Removal of Unnecessary Software
22% Invoice Reconciliation
11% Avoid Compliance Penalties
11% Productivity Improvement
6% Tax Savings
5% Optimised CPU Upgrades
4% Better Vendor Negotiation
3% Avoid Redundant Product Evaluation
3% Avoid Unnecessary Purchases
If you are interested in finding out how IBM Tivoli License Compliance Manager can help your business, please visit our web site at www.orb-data.com, or email us at [email protected].
Building data flows with IBM Tivoli Directory Integrator by Colin Miles

Despite its name, IBM Tivoli Directory Integrator (ITDI) has uses far beyond the synchronisation of data between directories.
In fact, ITDI is an extremely useful and
flexible tool that helps in the rapid
development of data flows between pretty
much any repository of data – be these
files, directories, databases or
applications. Where traditionally there
would be a requirement for custom coding
to develop the required interfaces
between systems, ITDI helps the user to
build the communication and data
transformation paths quickly by using a
graphical editor to assemble pre-built
components and then define the
processing logic that binds the process
together.
How does ITDI work?
ITDI simplifies and accelerates the process
of building integrated data flows by
defining and linking together the required
components into an end-to-end process
definition termed an Assembly Line.
Development of ITDI Assembly Lines is
carried out using the Config Editor (an
Integrated Development Environment).
The Config Editor provides a graphical
interface for bringing together all the
required components and quickly
modelling the required throughput and
transformations of data. Creating and
deploying an Assembly Line with ITDI
typically involves completion of the
following tasks:
• Identify all the systems and properties
involved in the data flow and deploy
connectors to import or export data in
the required format.
• If required, parse and transform the data
into the required format(s) and define
the link criteria for associating data
between systems.
• Detail processing logic by exploiting the numerous hook points, providing code (simple JavaScript) at the desired points in the Assembly Line execution.
• Make provision for the events that define
when and how data flows should be
initiated, or detail updates that are to
occur in a batch mode.
• Test and debug the Assembly Line
interactively.
• Package the Assembly Line into a
standalone component (i.e. daemon or
service) – allowing the configuration to
be shared and reused across the
infrastructure.
• Allow for the remote administration and
monitoring of Assembly Lines from any
location.
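The steps above can be illustrated with a toy data flow. This is a conceptual sketch in Python of what an Assembly Line does — read from a source, transform, write to a target — not ITDI code; a real Assembly Line is built from connectors and JavaScript hooks in the Config Editor, and every name below is invented:

```python
# A toy "assembly line": CSV source connector -> transform hook -> target.

def csv_connector(lines):
    """Iterator-mode source 'connector': parse each CSV line into a work entry."""
    for line in lines:
        uid, name, mail = line.strip().split(",")
        yield {"uid": uid, "cn": name, "mail": mail}

def transform(entry):
    """Hook point: normalise data before it reaches the target system."""
    entry["mail"] = entry["mail"].lower()
    return entry

def target_connector(store, entry):
    """'AddOnly'-style target connector: write the entry keyed by uid."""
    store[entry["uid"]] = entry

source = ["u1,Alice Smith,[email protected]", "u2,Bob Jones,[email protected]"]
directory = {}
for record in csv_connector(source):      # the Assembly Line iterates the source
    target_connector(directory, transform(record))

print(directory["u1"]["mail"])  # [email protected]
```

The value of ITDI is that the connector and hook pieces sketched here come pre-built and configurable, so flows like this are assembled graphically rather than coded by hand.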
ITDI connectors
A key part of the value of ITDI is that it
delivers a standard set of connectors that
define the required transport mechanisms
and structure for communicating with a
range of common data repositories. These
connectors allow interfaces to common
repositories to be deployed “out of the
box” by simply allocating the appropriate
connector to your configuration.
For applications or systems not covered by
any of the standard connectors or
functions provided, ITDI allows for
extensibility via a fully featured Java API.
This allows communication with just about
any system or repository to be
implemented as well as catering for the
development of bespoke functions to be
executed in conjunction with the Assembly
Line.
ITDI example usage
Whilst the potential applications for ITDI
are vast, some examples may include the
following:
• Ensuring data consistency by detecting
changes to authoritative sources of data
in one location (i.e. additions,
modifications or deletions) and
propagating this information to all other
relevant systems.
Example ITDI connectors
• Active Directory
• Exchange
• FTP
• File system
• JDBC
• JMS
• JNDI
• LDAP
• Domino/Lotus Notes
• RDBMS changelog (Oracle / DB2)
• DSMLv2
• XML
Did you miss a previous issue?

Issue 1
• Companies Spend to Solve the Identity Conundrum
• 'Just in Case' Computing
• News in a Minute
• Data, data everywhere – Orb Data Reporting Application
• Hotfix Deployment Comes of Age
• Technical Corner: Rules Based on Time

Issue 2
• Managing Distribution Operations
• Linking ESM functions to business value
• ITIL and best practice – learning from others' mistakes
• News in a Minute
• Security – Why wait to comply?
• Increasing the reliability of your Tivoli Environment?
• Technical Corner: Using Tracing in Resource Models

Issue 3
• Creating an Inventory Schedule in 30 minutes
• News in a Minute
• Is a Configuration Management Database important to your organisation?
• Role mining: a quick route to Role-Based Access Control
• IBM Tivoli Education – News
• Tips for writing Resource Models using VBA Script

Issue 4
• An Introduction to Federated Identity Management
• News in a Minute
• The Odyssey SOAP Interface for ITM 6.1
• Accelerating ITIL Best Practice Adoption
• Managed Objects – Overview
• Improving IT Service Delivery in Local Government
• Viewing Resource Models in Tivoli Enterprise Portal
• Enabling the migration of data from
legacy systems.
• Automatic transformation of data files
from one format to another.
• Extending the scope of existing systems
by allowing the rapid deployment of new
interfaces.
• Integrating geographically remote
systems via web services.
• Enabling password synchronisation
between multiple systems.
ITDI and Identity Management
One of the key areas of usage for ITDI is in
the field of Identity Management. This is
partly because organisations traditionally
find that data relating to their employees,
partners and suppliers is often distributed
around the infrastructure with significant
scope for inaccuracies and inconsistencies
to result. By deploying the appropriate
ITDI Assembly Lines data can be combined
and synchronised across all repositories in
real time – thereby improving data quality
and easing the administrative burden.
ITDI also forms an integral part of the IBM
Tivoli Identity Management portfolio –
where it can be used in the deployment of
HR based identity feeds and the
development of custom adapters to legacy
systems for the automation of user
administration tasks.
For more information on IBM Tivoli Directory Integrator refer to the links suggested below, or contact Colin Miles, who will be happy to discuss potential scenarios in your environment.
Tel: 01628 550475
Email: [email protected]
Further Reading
ITDI Getting Started Guide:
http://publib.boulder.ibm.com/infocenter/tivihelp/v2r1/topic/com.ibm.IBMDI.doc_6.1/gettingstarted.pdf
ITDI Users Group: http://www.tdi-users.org
ITDI News Group: news://news.software.ibm.com/ibm.software.network.directory-integrator
Don’t worry, you can download them here: http://www.orb-data.com/messagebroker
IBM Tivoli Monitoring v6.x with (at least) Fixpack 2 introduces a feature that allows safe traversal of multiple firewalls.

This is an XML-based solution for configuring Tivoli Enterprise Monitoring Agents (TEMAs) to act as proxy TEMS, up-relay or down-relay components.
The configuration is performed through XML files that reside locally on a TEMA. To start using this feature, modify the TEMA's local configuration file and add the location of the gateway.xml file in the format KDE_GATEWAY=filename. This XML file contains all the configuration details for the agent. The per-platform configuration files, and the settings we used in our example, are listed below.
Sample Firewall Environment
In the sample firewall environment above, we have three firewall zones: white, green and amber. However, this can be extended to work over multiple hops and, so far, has been tested across up to 12.

In this configuration, connections are always initiated from the most secure side outwards: from white to green, and then from green to amber.
TEMA orbdatawhite1
In the white zone there is a Remote TEMS
called orbdatawhite1. This system also
has a TEMA which acts as a client proxy
and a down relay through the firewall into
the green zone. All the TEMAs in the white
zone will point to a TEMS as normal:
<tep:gateway xmlns:tep="http://xml.schemas.ibm.com/tivoli/tep/kde/" name="orbdatawhite1">
  <zone name="white">
    <interface name="clientproxy" ipversion="4" role="proxy">
      <bind localport="poolhub" service="tems">
        <connection remoteport="1918">10.1.1.10</connection>
      </bind>
      <bind localport="poolwhp" service="whp">
        <connection remoteport="6014">10.1.1.111</connection>
      </bind>
      <interface name="downrelay_green1" ipversion="4" role="connect">
        <bind localport="10021">10.1.1.10
          <connection remoteport="10021">11.1.1.10</connection>
        </bind>
      </interface>
    </interface>
  </zone>
  <portpool name="poolhub">20000-20099</portpool>
  <portpool name="poolwhp">20100-20199</portpool>
</tep:gateway>
Using the ITM 6.x Firewall Gateway Feature by Jason Forsyth
Unix:    $CANDLEHOME/config/ux.ini and $CANDLEHOME/config/ux.config
Windows: %CANDLEHOME%\tmaitm6\KNTENV
Linux:   $CANDLEHOME/config/lz.ini and $CANDLEHOME/config/lz.config

ux.ini:
KDE_GATEWAY=/usr/tivoli/IBM/ITM/FIREWALL/gateway.xml

ux.config:
KDE_GATEWAY='/usr/tivoli/IBM/ITM/FIREWALL/gateway.xml'
TEMA orbdatagreen1
In the green zone there is a TEMA called
orbdatagreen1. This system acts as an
uprelay to the white zone, a downrelay
into the amber zone as well as a TEMS
server proxy. All the TEMAs in the green
zone will be configured to talk to the
server proxy on orbdatagreen1. These
TEMAs will appear in the TEP as normal.
TEMA orbdataamber1
In the amber zone there is a TEMA called
orbdataamber1. This system acts as an
uprelay to the green zone and a TEMS
server proxy. All the TEMAs in the amber
zone will be configured to talk to the
server proxy on orbdataamber1. These
TEMAs will appear in the TEP as normal.
For a fuller explanation of this feature see
http://www.orb-data.com/ITM61firewalls
<tep:gateway xmlns:tep="http://xml.schemas.ibm.com/tivoli/tep/kde/" name="orbdatagreen1">
  <zone name="green">
    <interface name="uprelay_white1" ipversion="4" role="listen">
      <bind localport="10021">11.1.1.10
        <connection remoteport="10021">10.1.1.10</connection>
      </bind>
      <interface name="downrelay_amber1" ipversion="4" role="connect">
        <bind localport="10022">11.1.1.10
          <connection remoteport="10022">12.1.1.10</connection>
        </bind>
      </interface>
      <interface name="serverproxy" ipversion="4" role="proxy">
        <bind localport="1918" service="tems"/>
        <bind localport="6014" service="whp"/>
      </interface>
    </interface>
  </zone>
</tep:gateway>
<tep:gateway xmlns:tep="http://xml.schemas.ibm.com/tivoli/tep/kde/" name="orbdataamber1">
  <zone name="amber">
    <interface name="uprelay_green1" ipversion="4" role="listen">
      <bind localport="10022">12.1.1.10
        <connection remoteport="10022">11.1.1.10</connection>
      </bind>
      <interface name="serverproxy" ipversion="4" role="proxy">
        <bind localport="1918" service="tems"/>
        <bind localport="6014" service="whp"/>
      </interface>
    </interface>
  </zone>
</tep:gateway>
Published by Orb Data Limited, The Chapel, Grenville Court, Britwell Road, Burnham, Bucks, SL1 8DF. Telephone: +44 (0) 1628 550450.
IBM and Tivoli are trademarks of International Business Machines Corporation in the United States, other countries, or both.