White Paper
Enterprise Application Integration Success Factors
Paul Wagner
September 2002
Contents

Introduction ... 3
EAI and the architectural challenge ... 3

Part 1: Understanding the architectural challenge ... 3
  Integration means complexity ... 3
  The devils you don’t know ... 4
  The “right way” ... 4
  The management challenge ... 4
  Packages and integration ... 5
  Where are the standards? ... 6
  Architectural philosophy ... 6
  Maturity of components ... 7
  Granularity ... 7
  Meeting the challenges ... 8

Part 2 – Developing an Enterprise Integration Architecture ... 8
  Adopt an evolutionary approach ... 8
  Keep it simple ... 8
  Understand the goals ... 9
  The information model ... 9
  Business objects ... 10
  Identifiers ... 10
  Operations ... 11
  Events ... 12
  Service oriented architecture ... 12
  Process driven integration ... 13
  The business view ... 13
  Summary of principles ... 14

Part 3 – EAI Implementation Guidelines ... 15
  Message bus ... 15
  What loose coupling really means ... 15
  Loose coupling and technologies ... 16
  Adapters ... 17
  Adapter implementation guidelines ... 18
  Messaging considerations ... 19
  Introduction to EAI business patterns ... 21
  The “aggregate enterprise state” pattern ... 21
  The “synchronise enterprise state” pattern ... 22
  The “perform enterprise task” pattern ... 23
  The “perform and monitor enterprise process” pattern ... 24
  The “process multiple objects” pattern ... 24

Conclusions ... 25
EAI Success Factors
www.eservglobal.com – [email protected] page 3 | 25
Introduction

Industry analyst Ovum provides this insight into business
survival for the twenty-first century:
“Wherever you look in the business context, the
dominant trend is to integrate business processes
more tightly within organisations and across supply
chains. This type of business integration relies on
application integration.” 1
This paper examines the significance of application
integration together with its business and commercial
considerations. Given this perspective, the first key message
is:
► System integration must be commercially justified and hence must be directly geared toward business outcomes.
The process of application integration must deliver solutions
that are directly geared to enterprise business goals.
Large-scale systems integration is both complex and costly. There
is no quick and easy fix, yet somehow solutions must cater
for business environments that are progressively more
dynamic. Uncertainty over future business direction in the
competitive global environment – due to merger, acquisition,
competitive pressure, and technical innovation, to name a few –
is an omnipresent assumption that influences our thinking on
integration design and architecture.
Increasingly business outcomes are reliant on information
technology driven solutions. Yet planning cycles in all areas
of business are so short that there is a tendency to steer
clear of projects that involve a complete software
development life cycle. For it is the ability to embrace
change, even sudden change, that is perhaps the single
most important attribute of any modern business system.
Any effective integration architecture must be able to cope
with constant change. From the technological perspective
these business goals require flexibility and adaptability in the
solution landscape.
EAI and the architectural challenge

To address these integration challenges, a model that has
evolved for large-scale systems integration is Enterprise
Application Integration (EAI). Although concepts of EAI are
well understood (and well documented elsewhere) there are
no actual standards. Rather, EAI is a strategy based around
a few key concepts and some generic tools. This has
allowed vendors, aided by industry analysts, to hop on the
EAI marketing bandwagon.

1 Enterprise Application Integration: Making the Right Connections (Katy Ring, Neil Ward-Dutton), page 21

Almost all packaged tool sets
labelled as EAI contain core components that are not new,
and most tool sets are poorly integrated.
► Contemporary integration strategies are based on EAI, but any solution still faces significant architectural challenges
The thrust of this paper is that EAI technology is a starting
point only and it is the development of an enterprise
integration architecture – a process that will require
investment, nurturing, and ongoing support and
maintenance – that is the key to unlocking the claimed
benefits of EAI.
The enterprise integration architecture must represent a
merger of three key components: the business information
model, the IT strategy, and the corporate culture. It is a
significant undertaking and the purpose of this paper is to
explore issues within this area, and where possible, offer
guidance and practical tips.
We begin by looking at large-scale systems integration
from a number of vantage points in order to gain a
perspective on the issues that confront integration projects at
the outset.
Part 1: Understanding the architectural challenge

In this section we examine the nature of challenges to be
faced when tackling enterprise level integration on a large
scale. Key messages for IT managers and strategic
architects are highlighted in bold type. In Part 2 we develop
a set of principles to address these challenges and guide
architectural development, whilst in Part 3 we put it all
together in an examination of EAI and its implementation
concerns.
Integration means complexity

Just hearing the words “large scale systems integration
project” is enough to send a shiver down the spine of your
average CIO. Just why is it that so many systems integration
projects take longer, cost more, and deliver less than they
promise? The short answer is simple: complexity!
If it were not complex we would do it easily – right? We just
gather the requirements, do some analysis and design, we
follow the methodology, the tools assist us, and we deliver
on time and on budget. Except, as we all know, it rarely if
ever happens like that. Many IT projects don’t happen like
that even where the problem really is relatively simple! Sure
we have made progress over the years but large-scale
systems integration is a hard problem – real hard. Don’t be
fooled otherwise.
► Systems integration involves complexity across a number of dimensions, making it very difficult to manage
The first thing to remember then is that you will need to
manage complexity – concepts, architectures,
technologies, and people!
The devils you don’t know

Consider the IT landscape involved.
Several applications might be in scope. Often these are
large packages (such as ERP, CRM, Billing, etc) “owned” by
different parts of the business. There is some “glue” stuff we
call middleware. EAI typically relies on messaging
middleware, plus some extra stuff to connect the middleware
glue to each application. EAI calls these adapters. There are
tools to help deliver complete integration scenarios. With
EAI, these additional tools are usually workflow engines, and
message transformation tools, but traditional and more
advanced middleware may include a Transaction Processing
Manager (TPM). Then there will be other tools that assist
development, deployment, operational management, as well
as runtime support. All up we are looking at a lot of different
technologies with lots of hidden knobs and dials. There are
many different ways to glue and stick software together, with
advantages, disadvantages, impacts, dependencies and
details galore.
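The moving parts just described – messaging middleware as the “glue”, adapters on each application, and transformation in between – can be sketched in miniature. The sketch below is purely illustrative and assumes nothing about any particular EAI product; every name in it (MessageBus, CrmAdapter, the customer.changed topic, the field names) is invented for the example.

```python
# Toy sketch of the EAI moving parts described above: a message bus
# (the middleware "glue"), adapters that connect applications to it,
# and a transformation step. All names are invented for illustration.

class MessageBus:
    """Routes messages from publishers to subscribers by topic."""

    def __init__(self):
        self._subscribers = {}  # topic -> list of handler callables

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        for handler in self._subscribers.get(topic, []):
            handler(message)


class CrmAdapter:
    """Adapter: translates CRM-native records into bus messages."""

    def __init__(self, bus):
        self._bus = bus

    def on_customer_changed(self, crm_record):
        # Transformation: map package-specific field names onto a
        # shared (canonical) message format before publishing.
        message = {
            "customer_id": crm_record["CUST_NO"],
            "name": crm_record["FULL_NAME"],
        }
        self._bus.publish("customer.changed", message)


class BillingAdapter:
    """Adapter: applies bus messages to the billing application."""

    def __init__(self, bus):
        self.updates = []
        bus.subscribe("customer.changed", self.on_message)

    def on_message(self, message):
        # A real adapter would call the billing package's own API here.
        self.updates.append(message)


bus = MessageBus()
crm = CrmAdapter(bus)
billing = BillingAdapter(bus)
crm.on_customer_changed({"CUST_NO": "42", "FULL_NAME": "Jane Doe"})
print(billing.updates)  # [{'customer_id': '42', 'name': 'Jane Doe'}]
```

Note that neither adapter knows the other exists – each depends only on the bus and the shared message format, which is the essential shape of the architecture described in this section.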
But there is more. You may have a corporate security policy
that demands that everything must be nailed down. This
could involve encryption, non-repudiation, and full audit
capability. There might be angst over using 2-phase commit
to guarantee every update, even when most of the
applications cannot support this capability. And of course
everybody assumes that the result will be fully Web-enabled
– using Web Services and ready for e-business. Finally,
since we are talking large scale and business critical it better
perform real well. And it better be highly scalable… and fault
tolerant… oh, and manageable, and… well, the list just goes
on.
► You must manage multiple business areas, multiple technology domains, pompous subject matter experts, entrenched opinionated views and, of course, political agendas
Let’s face it. No one person can really get his or her head
around all of this stuff. Yet you want to do this project the
“right way”. So what is this, this “right way”? There are lots
of opinions but equally a long list of unknowns. Over the
course of the project many details will emerge that were
unknown at the outset. Understanding that this will happen
means adopting a layered management approach, where
details at one level – even issues that appear out of control
– do not ripple through and affect all layers of the project.
The “right way”

Doing things the “right way” is not purely a technical
consideration. Few will argue against the notion that the
right way is the one that will maximise commercial
advantage. Making sure that everyone understands this is
critical. There will inevitably be many trade-offs and this
represents a clear principle for decision-making. Whilst it will
not necessarily make decisions easy, as links between
technical and commercial outcomes are difficult to identify,
quantify and justify, it will provide a clear guideline when
faced with an impasse.
► There are so many options that the only right way is the one that makes commercial sense
Make sure you document the key business drivers and
their linkage to the integration requirements. Plan multiple
delivery phases where each one provides some new
business function and some new or refactored infrastructure.
Don’t be afraid to refactor as you learn more about the
architecture, but avoid deliveries that are purely technical
and cannot be justified by the functionality that they deliver
for the business.
The management challenge

Management buy-in is a must.

► Big projects, big budgets, and ambitious deadlines, combined with a long list of unknowns, spell just one thing… big risks
The overall budget is likely to be large, so steering a
course through intersecting agendas will need a skilful
artisan. Constructing a compelling strategy in the absence of
a firm ROI for a potentially large capital outlay is no simple
matter. It will inevitably become easy to be wise after the
event, yet developing a traditional business case in advance
may be near impossible. Why? Two main reasons: one
being that you will have to base estimates on a number of
assumptions and environmental parameters that are poorly
understood at the outset; and the other more insidious one
is that the business will keep changing during the course
of the project.
As you learn you will have to adapt to the problems you
encounter, and modify designs based on your growing
experience and technical insights. However, realise that no
matter how many times you rework and refine it will never be
perfect. Clearly you will need some really smart folk, some
neat EAI technology, and good project management, as part
of an overall team effort, but if the key people have never
done this before then industry experience suggests that the
chances are you will fail.
A project with a big budget is a magnet for new requirements
– also known as scope creep. You know, “while you’re doing
all this stuff you may as well just do this tiny change too
please”. And yes, it can be hard to say no when people
seem to make such irritatingly perfect sense! “No” may not
always be the “right” answer – it may be, as technical and
business details evolve, commercially advantageous to
modify your plans (but not your strategy) as you go. Be
prepared for this and set up some process for quick
re-evaluation and commitment.
Naturally you will have to manage to a date. There’s always
a date at the outset to make the “window of opportunity”,
affect the “sales pipeline”, go to “commercial launch”, ensure
“time to market”, gain a “competitive advantage”, or
whatever.
With all these considerations it is often the case that the
project starts before the real problem is understood. There is
lots of money, lots of stakeholders, lots of experts (that may
not agree), and a delivery date, but very little actual detail.
The sole comfort probably comes from a risk management
plan based around a few glaring unknowns.
Packages and integration

Now for the real catch.
Although there will undoubtedly be much excited talk about
the EAI technology, the project schedule and costs, the
launch date, and other project issues, it often turns out at
this stage that the real nature of the business problem is not
well understood. It somehow sounds simple, as though we
intuitively know exactly what must be done. What with all
the leading edge integration technology the EAI tools appear
fascinating and complex, yet in many ways they are the easy
bit. They are there and are known.
► The integration technology itself is not the problem, but rather the information model.
Indeed, the business problem that often looks easy up front
soon emerges as the proverbial tip of the iceberg. It might
be simple in concept, but as we all know (to risk a further
proverbial analogy) that the devil is in the detail. The
problem that will often emerge revolves around conflicts in
the information model. Each application package
represents a different facet of this problem.
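To make the information-model conflict concrete, consider a hypothetical sketch in which a CRM package and a billing package each hold a “customer”, but disagree on identifiers, field names, and naming conventions. The record layouts below are invented, not drawn from any real package; the point is that reconciling such differences into one canonical record is where the real work lies.

```python
# Hypothetical illustration of an information-model conflict: two
# packages hold the same customer under different identifiers and
# field conventions. All record layouts are invented for the example.

crm_customer = {
    "CustomerId": "C-1001",
    "FirstName": "Jane",
    "LastName": "Doe",
}

billing_account = {
    "ACCT_NO": 77123,          # billing's own identifier
    "ACCT_NAME": "DOE, JANE",  # one field, a different convention
    "CUST_REF": "C-1001",      # cross-reference to the CRM key
}

def to_canonical(crm, billing):
    """Map both package views onto one canonical customer record."""
    # Sanity check: the cross-reference must agree on who this is.
    assert billing["CUST_REF"] == crm["CustomerId"]
    return {
        "customer_id": crm["CustomerId"],       # CRM key chosen as master
        "billing_account": billing["ACCT_NO"],
        "name": f"{crm['FirstName']} {crm['LastName']}",
    }

print(to_canonical(crm_customer, billing_account))
```

Even this tiny example forces architectural decisions – which key is the master, which system's name format wins – and those decisions, multiplied across every business object, are the substance of the information model.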
Typically, each application package employs its own
architecture, embedded technology, business model, and
processing logic. Systems like CRM, ERP and other large
packages assume that they are the processing focal point to
which other IT systems must defer. They assume they hold
the enterprise view. They cannot be bent out of shape (well,
they can, but this type of customisation costs a lot of
money and causes future support headaches, so as a rule
we don’t do that!).
There are no real standards when it comes to applications. That would break the marketing maxim whereby products must be differentiated. And indeed, proprietary technology builds value in software companies.
However, a technology that can saturate the market, no matter how “proprietary” it is – meaning marketed under and protected by a single commercial entity – can become a de facto “standard”, a yardstick if you like (consider the likes of MS Word).
Where such a technology is accepted as a standard, similar and even arguably superior technologies without such market acceptance will retain their “proprietary” label.
New technologies often pave their way by latching on to the industry mind share, establishing a momentum, a market pressure toward an “emerging standard”, and creating customer expectations in the process.
We know this pressure as hype.
Such technology “standards” are almost exclusively the outcome of commercial market forces (take Java as an example).
When applied to technology, the word “standard” in the dictionary sense meaning “accepted as normal or average” must simply refer to where the market is at, or where the market is going.
So it is that an application with no market share should appeal to standards (or hype) to demonstrate that it is going where the market is going. Conversely, a product that already has significant market share is where the market is at by definition, and is best served by not going anywhere, especially if that would mean a loss of differentiation, and hence exposure to increased competition, and loss of company value.
Successful packages simply have less to gain by implementing new or emerging standards. These vendors succumb but slowly to customer demands for more openness and increased use of “standards”.
This inertia is the vendor / standards paradox.
Since the general rule is to perform package configuration
only, we have suddenly introduced a set of constraints over
the information available. Notwithstanding the standards
paradox (see inset), we can today acquire packages that
are used by different parts of the business, yet employ
similar base technologies. This does indeed solve one
aspect of the problem. Yet with packages “as delivered”
this benefit of underlying technology standardisation
generally starts and ends with adherence to the standard
operating environment. Technical compatibility is an
advantage, but is still a long way from information
compatibility.
Where are the standards?

Each application is traditionally designed to provide its own
self-contained system. It is often a stovepipe from
presentation layer through to database schema. Each
brings its own peculiar baggage, its own infrastructure
requirements, its own configuration technique, its own
information concepts, application specific commands,
languages even, and its own interpreted blend of standards
so that the overall architecture remains essentially product
specific, closed, proprietary.
Clearly we need open access points if we are to integrate
applications and remain sane. Now we’re talking
architectural standards that in general do not exist.
► There are no real architectural standards for applications and information
In general, applications can be carbon dated by their
overall technological make-up. Depending upon the design
date of the application, some will be single tier, some 2-tier,
some 3-tier and a few n-tier. Some are object-oriented,
some service-oriented, some task-oriented, some
process-oriented, some database-centric. Some will resemble a
highly generic toolkit that requires extensive work to turn
into a business application, while others will offer a specific
out-of-the-box solution as an industry vertical.
Certainly the API is not a new phenomenon, but the trend
toward providing specific access points in a package for
integration with collaborative packages is indeed a
relatively recent one. While vendors are increasingly
recognising this need for their packaged software to be
“open” for integration purposes, not all application
packages conform to the same technologies in this area.
Even where they do, different packages will use exactly the
same technology in completely different ways. So while
technical standardisation continues to break down the
barriers to application integration, significant hurdles still
remain.
The bottom line is that there is still no industry standard
way to access an application. There are only application
specific ways. As we will see, there is also no right or
wrong technology for application access. The technology
actually favoured by a particular application will probably be
reflected in the maturity of the software, its platform
leanings, and the hype of the day when its integration
access was first considered, if at all.
Your EAI tools will certainly assist with the technology
integration but despite promises from tools vendors, you
are completely on your own when it comes to sorting out
how to perform information integration. This involves
architecture, and this is where the decisions you make can
have a dramatic impact on the effectiveness of your
solutions.
The development of the NSW and Victoria state railways in Australia involved exactly the same sort of technology and engineering, but their integration (cross-border travel) was rendered quite incompatible by the difference in a single parameter – the gauge. Small things can make a big difference.
Architectural philosophy

Underlying any architectural strategy is a basic
philosophical approach. Before getting down to the EAI
architecture one must adopt the appropriate philosophy
with respect to the role of enterprise application packages
within the organisation.
If you have one or two major application packages, such as
CRM and/or ERP, and you have established vendor
relationships, skills, and solutions with these packages as
the IT centre-piece, then you will most likely adopt a
package driven approach to integration. The information
models of the core packages will dominate the enterprise
view of processing. Their business processing model will
determine the points at which they need to integrate with
other systems to fetch data, or request external operations.
They may even have their own flavour of EAI tools and/or
preferred technologies to work with, as well as their B2B
extensions. In such a case the EAI architecture most likely
resembles a further extension of the package information
models and the core integration logic resides within the
packages themselves. This type of approach suits small to
medium enterprises that have invested heavily in a
package, adapting their business processes around its
capabilities, and aligning their strategy closely with the
package vendor direction.
On the other hand, if your approach to packages is to view
them as providing specific domain based functions, or point
solutions, that are connected through the various business
processes into an over-arching business solution then you
should definitely adopt a process driven approach to
integration. This requires a strong architectural approach
that maintains an enterprise information view quite external
from the packages and vendor directions. Applications are
relegated to a passive role, acting predominantly as
servers that ensure their domain based information is
managed safely. The core integration logic with its process
flow resides within the EAI middleware space. This type of
approach suits large enterprises, or those with multiple
component based applications, that have their own IT
strategy and wish to remain agile and as independent of
package vendors as possible.
► A fundamental architectural decision is to determine what drives the enterprise information model.
The focus in this paper is on the process driven approach.
Maturity of components

According to a rather visionary 1997 GartnerGroup report,
“component technology will not be mature enough to
support mainstream enterprise computing until 2001”. It
went on to predict that by 2001 “it will be difficult to find an
application environment that is not component-based”.
Further, the report suggested that benefits would only come
at the price of retooling the enterprise environment with a new
generation of middleware. However, the most significant
advice GartnerGroup gave then on the trend toward
component based technology is as follows.
“Most of the benefits of component computing can
be achieved without ORB and OTM technologies.
The benefits are based on the principles of good
application design through normalization and
service orientation. These principles can and
should be applied today in all new application
projects, and adherence to these basic principles
will enable enterprises to prepare for the next
generation of application architectures, while not
having to prematurely invest in technology that is
still “in transit.” Even if new trends and events
change the course of the industry’s evolution, the
building of normalized and service-oriented
systems today will still prove to have been the
right strategy.” 2
Yes – that was good advice then when mainstream thinking
was about CORBA based ORBs and Object Transaction
Monitors – but it is even better advice now with the
maturing J2EE technologies and the rise of Web Services.
► Use a service oriented architecture with component based deployments
To further emphasise the point, it means that achieving
some, if not all, of the benefits that stem from a component
based architecture does not necessitate the use of object
based technologies. Indeed, regardless of the technology
used, the separation of concerns into distinct functional
components reduces software dependencies leading to
improved manageability and reduced costs. A good EAI
strategy is based on a service-oriented architecture for the
logical view, and uses a component based approach for the
deployment view.
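As a rough illustration of this principle, the sketch below separates the logical view (a small service contract that callers depend on) from the deployment view (a component that implements the contract and can be replaced independently). The service and class names are hypothetical, invented only to show the separation.

```python
# Sketch of a service-oriented logical view with a component based
# deployment view: callers bind to the contract, not the component.
# All names are illustrative only.

from abc import ABC, abstractmethod

class CustomerLookupService(ABC):
    """Logical view: the service contract callers depend on."""

    @abstractmethod
    def find_name(self, customer_id: str) -> str: ...

class CrmBackedLookup(CustomerLookupService):
    """Deployment view: one deployable component behind the contract."""

    def __init__(self, records):
        self._records = records

    def find_name(self, customer_id):
        return self._records[customer_id]

def greeting(service: CustomerLookupService, customer_id: str) -> str:
    # The caller is coupled to the contract, not to any component,
    # so the component can be swapped without touching this code.
    return f"Hello, {service.find_name(customer_id)}"

svc = CrmBackedLookup({"C-1": "Jane Doe"})
print(greeting(svc, "C-1"))  # Hello, Jane Doe
```

This is the “normalization and service orientation” of the Gartner quote in miniature: the benefit comes from the separation of concerns, not from any particular object technology.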
Granularity

One issue when dealing with components is getting the
granularity right. In this respect it is often a mistake to
perform top down analysis and design, in an effort to
“theorise” the nature of the enterprise in a conceptual
manner, before attempting to match the results to the
current suite of IT application packages. There must be a
pragmatic approach that recognises package granularity as
an actual boundary fixing significant aspects of the
componentised view, and having specific influence on the
information model.
Beyond the package boundary extreme care must then be
taken when attempting to impose a more granular view
regarding information and functions that are not inherently
componentised. This will often result in an impedance
mismatch between the externally imposed view and the
package’s internal view. Apart from this, increasing the
granularity of the view past a certain point merely leads to
unnecessary complexity. This in turn brings associated
problems and side effects.
► Strike a balance between the conceptual and actual views in order to get the granularity right
2 Object Transaction Monitors: The Foundation for a Component-Based Enterprise (Y. Natis) Gartner Report, 5 August 1997
Clearly, package boundaries determine much about the
segregation of business processing logic. These
boundaries will in turn influence the scope of each
middleware component, such as adapters, to reduce
dependencies and keep development and operational
issues simple. However, in abstracting further to business
objects it is common to require access to multiple packages
even where an apparent single action, such as “update
customer address”, is involved. Where functions are
placed, and in what components, and with what level of
granularity, are significant design challenges. Keeping an
appropriate separation of concerns will help determine
component granularity and clean component boundaries
will reduce maintenance efforts when changes are
required.
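The “update customer address” case can be sketched as a coarse-grained composite operation. The stores and field names below are invented for illustration; the point is only that one apparently single business action fans out to several packages, which is why function placement and granularity are genuine design challenges.

```python
# Sketch of the granularity issue above: one business action,
# "update customer address", touches multiple packages, so it belongs
# in a coarse-grained composite service rather than any one adapter.
# Package stores and field names are invented for the example.

crm_store = {"C-1": {"address": "1 Old St"}}
billing_store = {"C-1": {"mail_to": "1 Old St"}}

def update_customer_address(customer_id, new_address):
    """Composite operation spanning the CRM and billing packages."""
    # Each package keeps its own copy, under its own field name.
    crm_store[customer_id]["address"] = new_address
    billing_store[customer_id]["mail_to"] = new_address
    # A production version would also need a compensation strategy
    # for partial failure, since 2-phase commit is rarely available
    # across packages.

update_customer_address("C-1", "2 New Ave")
print(crm_store["C-1"], billing_store["C-1"])
```

Placing this logic in one composite component keeps the per-package adapters simple and gives the change a single home when a third package later needs the address too.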
Meeting the challenges

This section has deliberately highlighted many of the areas
where pitfalls can be encountered in an integration project.
However, the most important message concerns the need
for an architectural approach.
► To be successful with EAI you need to develop a robust enterprise integration architecture
Through techniques such as abstraction, concealment,
encapsulation, conversion, normalisation, translation,
transformation, and use of design patterns, the technical
process of integration can be reasonably understood. With
contemporary application packages an integrated
technology view of applications is now largely available.
Technical “wiring” is, generally speaking, quite a simple
matter. However, in order to get to the desired target, an
integrated business process view, the application
integration methodology must support mechanisms that
encourage a more consistent functional view across
packages. This amounts to convergence of the information
models, and this remains a major challenge. EAI tools
assist in doing the work, but they cannot provide the
knowledge necessary to describe how the work should be
done. To do this, you will need a solid, well-communicated
enterprise integration architecture.
Part 2 – Developing an Enterprise Integration Architecture

This section will draw on the challenges outlined in Part 1
and develop a set of principles that are considered most
useful in developing an overall architectural approach to
integration. As principles are introduced they are shown in
UPPERCASE, and referred to throughout for consistency. In
Part 3 the implementation of this architectural approach is
discussed in the context of general EAI technologies.
Adopt an evolutionary approach

Like Rome, no strategic integration architecture will be built
in a day. Common sense dictates that you should not
attempt to implement the entire architecture on the first roll
out. You will need to be content with some sort of
compromise. Techniques such as the Architecture Trade-off
Analysis Method 3 are well suited to this exercise in
order to isolate those attributes of the architecture that
have the most value. However, at an early stage of the
project, finding the correct balance between architecturally
driven, and business driven outcomes will remain a matter
for careful scrutiny and decision-making.
Where possible, maintain a consensual process so that all
stakeholders take part in the decision-making and
understand the reason for any trade-offs. This will maintain
expectations at realistic levels. It’s important that everybody
understands that when things don’t get done it isn’t
because they are “too hard”, but rather that there is a whole
bunch of things to get done and they don’t all fit the time
and budget available. Make trade-offs a strength of the
approach, rather than an opportunity for others to be
critical.
► Construct a non-purist approach to architecture that recognises and makes a strength of the necessity for trade-offs
Develop and deploy in small increments, and establish a
strong feedback loop to fuel the evolutionary approach.
Whilst architectural robustness is highly important to allow
for future business growth it is not in itself sufficient to
deliver a commercial benefit. A good leader will understand
this intuitively. Remember, each iteration should have new
function, and new infrastructure, and don’t be afraid to
perform some re-factoring in the process.
Keep it simple

Parsimony is a scientific principle which holds that “things
are usually connected or behave in the simplest or most
economical way”.
This is a good principle to remember when developing an
EAI strategy, as a robust architecture will leverage simple
patterns that work effectively, and use them repeatedly to
3 ATAM is a methodology developed and owned by the Carnegie Mellon Software Engineering Institute
quickly open up the processing power of the enterprise
application set. With all the complexity that is inherent in
EAI it is really important to introduce a strong dose of
simplicity. This will dramatically reduce risks. It does not
mean that you can cut corners, but it does mean that when
faced with two choices with equivalent outcomes the
simplest one always wins. Recognition of PARSIMONY is a
key theme of integration.
► Develop, then reuse simple, repeatable architectural patterns that are known to be effective
Understand the goals

What do you want from your enterprise integration
architecture?
Firstly, it must be simple and understandable enough to
be conveyed to both managers and developers alike. That
is, the audience will be both technical and non-technical. If
management do not "get it" then the chances are others
won't either, and you will fail in perhaps the most
important aspect of any architecture – its ACCESSIBILITY.
Misunderstandings at the design stage of just one project
can have enormous consequences for many years to come.
This is not to say that all layers of the architecture are
simplistic – although simple concepts can guide complex
issues – but what is most important is that all layers show
CONSISTENCY. An accessible, consistent, and well-
communicated architecture will provide enormous payback
by reducing risks in a multitude of situations.
Secondly, the architecture must be clearly aligned with the
business strategy. It is in this respect of ALIGNMENT that
you will find that not all EAI technologies are equal.
Considerations such as merger/acquisition strategies,
business partnerships, geographic distribution, network
topology, currency of business information, product rollout,
customer and transaction growth, etc, will need to be
accommodated by the architecture. This is an area that can
make EAI vendor selection a most critical exercise.
Thirdly, the architecture will need to accommodate
change gracefully. The one constant of all business is
change, and “graceful accommodation” means a number of
things: ADAPTABILITY, being the ability to reuse components
to produce changes in the processing environment;
FLEXIBILITY, being the innovative use of technology in
opportunistic ways; and ISOLATION, meaning the
independence of components in order to minimise the
impact of change. Success in all these aspects will render
a low cost of maintenance.
► An enterprise integration architecture is a significant corporate asset
The information model

Information modelling is where things become really
interesting. Remember, we are not just describing how to
perform integration using EAI, but how to develop an
enterprise integration architecture to guide the IT direction
over a series of projects that will satisfy short-term
requirements as well as delivering strategic commercial
outcomes.
It’s important to note that an information model is
conceptual, and remains the most important link in
maintaining ACCESSIBILITY of the architecture to
management and developers alike. For this to be
successful the model must obey the PARSIMONY principle
and use simple, repeatable constructs to provide a
conceptual framework that aids communication. Of course,
details will ultimately become the focus of the real work, but
these should be framed and understood in the context of
the enterprise model. In fact, as long as the meta-model is
defined, then both the information model and its
implementation details are free to evolve with the business.
► Understand the enterprise via its domain model and business objects.
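As a concrete illustration of the point that the model can evolve once the meta-model is fixed, the sketch below describes each business object type as data listing its attributes, operations, and events. All the names here are invented for the sketch, not drawn from any particular model.

```python
from dataclasses import dataclass, field

# A minimal meta-model: every business object type is described by
# its public attributes, operations, and events. The concrete
# information model (Account, Customer, ...) is then just data
# conforming to this meta-model, and can evolve with the business
# without the framework itself changing.
@dataclass
class BusinessObjectType:
    name: str
    attributes: list = field(default_factory=list)
    operations: list = field(default_factory=list)
    events: list = field(default_factory=list)

# One hypothetical entry in the information model.
customer = BusinessObjectType(
    name="Customer",
    attributes=["customerId", "name", "status"],
    operations=["create", "get", "set", "delete"],
    events=["notifyStatus"],
)
```

Because the model is plain data, adding a new business object type or a new operation touches only the model, never the framework, in line with the PARSIMONY principle.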
The most widely used abstraction in the information model
is the business object. A business object is a “real world”
entity, such as an Account, an Order, a Customer, etc,
and should not be confused with the OOP definition of
object. A business object can be implemented in a plethora
of ways, but OO thinking introduces another key principle
of our enterprise integration architecture, namely
ENCAPSULATION, whereby data and behaviour are isolated
and strictly the concern of the business object definition.
Whilst it may be tempting to start with a top-down modelling
exercise, most organisations will have a number of
application packages that can already be understood, via
abstraction, as providing functions that act against a
business object. A top-down approach can be useful in
setting a framework and is often available from standard
industry models, such as the TMF Model in the
Telecommunications industry. But generally this can only
go so far before the pragmatics of the actual IT package
environment must be factored into the exercise. Use the
idealised top-down approach to understand the domain
model, and then perform bottom-up analysis to fit
packages into one or more domains.
You will need to make an inventory of the various types of
business object found in each application package, but
remember that we are not trying to produce a model of
every feature and function in every package. The objective
is to isolate core business level function, and only where it
benefits from public exposure. Clearly an iterative
approach will be useful here, starting with the most
obvious business objects first, and then later exposing
others only as required.
There is something very important that you will notice
during this exercise – business objects do not always map
cleanly, one-for-one, to a specific application package. In
some cases a package may have a complete
implementation of a particular type of business object, such
as an Account. In other cases the complete view of a
single business object, such as a Customer object, may
actually be spread across a number of packages. This
makes ENCAPSULATION more difficult, but at the same time
more important to ensure consistent access to the
appropriate business data.
► Business objects provide the most useful way to conceptualise the business/IT environment.
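To illustrate the Customer case above, the sketch below shows one encapsulated accessor hiding the fact that the object's data is spread across a CRM and a Billing package. The package contents, identifiers, and field names are hypothetical.

```python
# Toy package stubs: each holds only part of the Customer's data.
CRM = {"C-1001": {"name": "Acme Pty Ltd", "segment": "corporate"}}
BILLING = {"C-1001": {"creditLimit": 50000, "balance": 1200}}

def get_customer(customer_id: str) -> dict:
    """Encapsulated view: callers see one Customer business object
    and never learn which package supplied which attribute."""
    view = {"customerId": customer_id}
    view.update(CRM.get(customer_id, {}))
    view.update(BILLING.get(customer_id, {}))
    return view
```

A caller asking for `get_customer("C-1001")` receives a single merged view; if billing data later moves to another package, only the accessor changes, preserving ENCAPSULATION.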
Business objects

Through the modelling exercise you will have established a
baseline of business objects and as part of this you should
have identified the main data properties, and functions
supported by each type of business object.
Drawing on standard OO principles we understand that the
business object encapsulates its data as a set of
attributes. In most cases business objects will be made
persistent in an RDBMS and the attributes will likely
correspond to columns in the database. This is of course
the simplest mapping only, and more complex or virtual
attributes are possible. However, it is also likely that not all
persistent data items are of public interest as attributes.
This is where judgment during abstraction is necessary to
maintain the simplest view of business objects.
The functions supported by business objects are termed
operations, and again we should only be interested in
isolating operations that can benefit from being made
public, or accessible externally, from the package. In many
cases there may be no clear specification of these
operations, especially where most application functions are
initiated via a user interface. Where an API exists then
there is a clearly available set of functions that have been
externalised, and are candidates for business object
operations. In other cases the operations may be inferred
from standard processing practices.
Often forgotten, but of the utmost importance to our EAI
implementation, is the notion of events. Rarely, if ever, is
read access to a business object of interest outside an
application package, but every data update operation is, in
theory, of potential interest to another party. Clearly,
advertising every change to every business object is both
unnecessary and prohibitive, so selective use of events will
be an ongoing concern.
Identifiers

In order to perform any operation on a business object we
need to specify the type of object and the operation
involved, but we also need a way of indicating exactly
which object instance we are referring to. That is, we need
to understand what comprises a business object
identifier. Clearly, each instance of a business object must
have a unique identifier in order to refer to it
unambiguously, but in the context of the enterprise
integration architecture there are other concerns.
In many cases the choice is simple. For many common
business objects the primary supporting application
package will already have an identifier, such as an
AccountNumber, or CustomerID, that is intended for use
inside of, and sometimes outside of, the organisation.
These are ideal as the architected business object
identifiers to maintain ACCESSIBILITY in the enterprise
integration architecture.
In other cases the choice may appear equally
straightforward, as many objects will have a database
primary key that passes the uniqueness test for identifiers.
However, if this key is not designated for external use,
much more care is warranted. Since EAI involves invoking
operations against business objects from external sources
(ie. outside of the application package) then we will be
effectively rendering the internal database keys of a
package into foreign keys when passed as object
identifiers. They in turn will be stored in other systems and,
if we are not careful, the indiscriminate use of such
identifiers will violate both the ISOLATION and
ENCAPSULATION principles, making future changes replete
with dependencies, and hence impacting the
MANAGEABILITY of the architecture.
► Identifiers should be the keys to your enterprise integration architecture, not the locks that prevent change.
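One common way to honour this principle is a cross-reference owned by the integration layer, so that only architected identifiers circulate outside a package. The sketch below is illustrative only; the identifier values and key names are invented.

```python
# The integration layer owns the mapping between the architected,
# externally visible identifier and each package's internal key.
# A package can change its internal keys on upgrade; only this
# map needs rebuilding, preserving ISOLATION and ENCAPSULATION.
xref = {
    ("Account", "ACC-2001"): {"billing_pk": 98765},
}

def internal_key(object_type: str, external_id: str, package_field: str) -> int:
    """Resolve an architected identifier to one package's private key."""
    return xref[(object_type, external_id)][package_field]
```

Other systems store only `"ACC-2001"`; the billing package's private key `98765` never leaks into their data, so a package upgrade cannot ripple through them.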
The widespread propagation of ill-conceived identifiers can
be particularly dangerous when considering a package
upgrade, such as installing a new release of a major
application package. Changes to the internal package
architecture, in either key structure or behaviour, can have
extensive rippling effects throughout an EAI
implementation. Even where no impacts exist, the lack of
ENCAPSULATION means an increased analysis effort is
required merely to ascertain this fact.
A further concern when foreign keys are distributed to other
systems is the degree of synchronisation effort involved.
This can appear to be under control during normal
processing but the issues are compounded during error
processing. The need to recover a package from a
previously backed up state can dramatically increase the
complexity to intractable proportions. Such synchronisation
issues are not specific to identifiers alone and will be
addressed in a later section in more detail.
Operations

The first thing to notice about operations is that they almost
always come in pairs – a “do” with its opposite “undo”
operation, such as create/delete, get/set, read/write,
activate/deactivate, etc. Consistent use of operations will
greatly assist the architecture’s ACCESSIBILITY.
► Developing a set of standard operations will increase the understanding of the enterprise integration architecture.
There are two standard object level operations, namely
create and delete. Given that loss of any business data is
of grave concern, in many cases the business rules around
business object deletion are highly restricted and hence the
delete operation is often not made public, or its semantics
are modified to perform a logical deletion only, as a
protective mechanism. Some care is necessary here so we
don’t break the CONSISTENCY theme by using delete to
sometimes mean physical deletion, and other times mean
logical deletion. One alternative is to introduce new
operations, such as disable, remove, or close, etc, in a
consistent fashion to set an inhibited state in the business
object where it has not been deleted. However, given a
natural inclination for people to say things like “delete the
Account” (even when it can’t be physically removed due to
audit history issues) a potentially more useful approach is
to use delete in the soft (logical) sense and introduce a
destroy operation which has a more permanent connotation
and would indeed mean physical removal. Since the
destroy operation will seldom be made public this simple
convention can improve ACCESSIBILITY and MANAGEABILITY
of the architecture, since what we say is what we do.
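The delete/destroy convention can be sketched with a toy in-memory store. The operation names follow the text; everything else here is illustrative.

```python
# Toy store of business objects, keyed by identifier.
store = {"ACC-1": {"status": "active"}}

def delete(object_id: str) -> None:
    """Logical deletion: the object remains for audit history,
    but is flagged so that normal processing treats it as gone."""
    store[object_id]["status"] = "deleted"

def destroy(object_id: str) -> None:
    """Physical removal; seldom exposed as a public service."""
    del store[object_id]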
Similarly, there are two standard attribute level operations,
namely get and set. Other names are often used for these
operations, such as read and write, get and update, or
many others according to corporate culture and
architectural taste. Although this is a matter for preference,
again the theme regarding CONSISTENCY is the most
important aspect.
Apart from these standard operations, for object level and
attribute level access, there can be any number of more
specialised operations that represent actions against a
business object. In general it is useful to develop a
standard operations list that covers other common
functions.
create Create a new business object (it’s an error if it already exists).
delete Logically delete an existing business object.
destroy Permanently delete an existing business object (seldom used).
add Add a new or existing business object to a container object.
remove Remove a business object from a container.
get Get the attribute values of one or more existing business objects.
set Update the attribute values of one or more business objects.
notify Issue an event concerning a state change in a business object.
list Return a list of business object identifiers based on the request criteria.
receive Receive data or bulk transfer of information.
send Send data or bulk transfer of information.
start Start a process against a business object.
suspend Suspend a process against a business object.
resume Resume a process against a business object.
terminate Terminate a process against a business object.
Common operations
More specific operations can often be expressed as an
extension or qualification of one of these common
operations. For example, setStatus may be more
meaningful as a specialised operation if it is commonly
used, rather than many set operations with a single status
attribute involved.
Events

Event-driven architectures have long been the domain of
applications, at least internally, and especially of GUI
programming. Yet few application architectures have given
thought to the usefulness of internally recognised events to
external processes. Client/server architectures of the
1990’s reinforced the “pull” model of programming where
the back-end of the application, being the portion with the
business logic (ignoring those outdated fat GUI apps), was
quite passive, surrendering its data only when requested by
the client. With the advent of Web applications, using thin
clients and even simpler client/server interaction
capabilities, this basic model has been retained. The result
is an extremely limited implementation set of “push” style
functions, and in many cases what appears to be a “push”
is simply a “pull” based on periodic polling or similar
mechanisms.
► Events are the real secret to an efficient and highly flexible implementation of the integration architecture.
Let there be no mistake – the efficiency of any enterprise
integration architecture will be directly proportional to its
ability to leverage events. Complete reliance on a pull
model will inevitably mean windows where the business
view is not synchronised. It will also cause a tighter binding
between applications in the client role and server role, and
a corresponding decrease in meeting our principles of
ISOLATION and ENCAPSULATION.
Events, by definition, are not specifically directed to a
receiver, but conform to a publish/subscribe model. The
application publishes an event to the middleware layer, and
the middleware is responsible for delivering it to any (zero
or more) interested parties. The event receivers must
previously have registered interest by subscribing to the
middleware for that particular event type. In this manner
events provide inherent support for a loose coupling
between applications, an important mechanism for
ISOLATION.
Application events should be lightweight. They provide a
notification of an application state change only, and should
not attempt to overload the event with excessive
information. At a minimum, the business object type, the
event type, and the object identifier are required. Rather
than introduce a new name space, the event type is often
modelled as an operation – for example as a notify with
additional qualification as necessary, such as notifyStatus.
Typically such an event would carry extra parameters, such
as the new status value for a notifyStatus, and maybe the
old status value as well.
Receivers of the event can firstly determine whether they
are interested in any more processing, and if so, exactly
what extra information they require. After extracting the
business object type and identifier, they then “pull” the
appropriate data by issuing a separate operation, such as a
getDetails, to gather sufficient up to date information for
their subsequent processing needs.
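This lightweight-event-then-pull flow can be sketched with a minimal in-process publish/subscribe mechanism standing in for the middleware. All of the names below are illustrative.

```python
from collections import defaultdict

subscribers = defaultdict(list)  # event type -> interested handlers

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, object_type, object_id, **params):
    # The middleware delivers the event to zero or more receivers;
    # the publisher never knows who, if anyone, is listening.
    for handler in subscribers[event_type]:
        handler(object_type, object_id, params)

# Stub for the separate "pull" operation a receiver would issue.
def get_details(object_type, object_id):
    return {"objectType": object_type, "objectId": object_id}

received = []

def on_status_change(object_type, object_id, params):
    # First decide whether the event is of interest at all...
    if params.get("newStatus") == "suspended":
        # ...then pull up-to-date data with a separate operation.
        received.append(get_details(object_type, object_id))

subscribe("notifyStatus", on_status_change)
publish("notifyStatus", "Account", "ACC-7",
        newStatus="suspended", oldStatus="active")
```

Note that the event itself carries only the object type, event type, identifier, and the status values; everything else is pulled on demand, keeping events lightweight.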
Service oriented architecture

So far we have concentrated on a business object style of
modelling for the enterprise integration architecture. These
modelling concepts are useful but we will not be interested
in inheritance, polymorphism, or similar OO implementation
concerns. These will overload the conceptual model and
introduce too much complexity for the heterogeneous IT
application environment. Instead we now turn our attention
toward a service oriented architecture.
In the service oriented architecture each business object
operation is considered to be a separate service. This is
the level of granularity that determines the demarcation line
of our architecture. Behind the service stands the private
and encapsulated implementation that users of the service
should neither see nor care about. On the entry side is the
public facing view. Hence, the service represents an
interface contract.
► The most accessible view of integration functions is as a set of services.
Having modelled access to application functions as
operations on business objects, we now flatten the view to
a set of service names within the service oriented
architecture. The service itself now becomes the unit of
invocation. Since our enterprise integration architecture will
require that each service is uniquely identified, a simple
and useful convention is that a service name is the
concatenation of operation, businessObjectType and a
qualifier, such as setOrderStatus,
listCustomerOrders, notifyAccountUpdate,
createAccount, etc. The qualifier can really be any string
that makes the service name both unique and meaningful.
Other information is passed as parameters on the service
call. This satisfies the architectural principles of PARSIMONY
and CONSISTENCY.
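As a sketch, the flattened service view need be little more than a registry keyed by the concatenated service name. The service bodies below are placeholders.

```python
# Service registry: each business object operation becomes one
# uniquely named service. The name concatenates the operation,
# the business object type, and an optional qualifier.
services = {}

def register(name, func):
    services[name] = func

def invoke(name, **params):
    # The service is the unit of invocation; everything else
    # travels as parameters on the call.
    return services[name](**params)

register("setOrderStatus",
         lambda orderId, status: {"orderId": orderId, "status": status})
register("listCustomerOrders",
         lambda customerId: ["ORD-1", "ORD-2"])
```

A standalone function not modelled as a business object operation registers in exactly the same way, which is what makes the flattened view accommodate any type of invocation.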
There are a number of subtleties of this approach, where
the conceptual view is maintained as business objects, and
the logical view is implemented as a set of services. These
subtleties will become clearer in later sections. Two
immediate benefits can be noted. Firstly, services become
the unit of mapping to an application, thus allowing an
abstract business object to be implemented in more than
one application, improving the ADAPTABILITY of the
architecture. Secondly, service calls may be standalone
functions (ie. not modelled as business object operations)
and hence this approach conveniently accommodates any
type of invocation in accordance with our FLEXIBILITY
principle.
Process driven integration

Now that we have examined the enterprise integration
architecture as a static model comprised of business
objects, and with functional access manifested as services,
we need some way to understand the architecture’s
dynamic model. What is the motive force behind the
architecture? What will drive the integration process to
perform service calls between applications?
In Part 1 of this paper it was suggested that there were two
philosophically different approaches, one being a package
driven approach, the other being a process driven
approach to integration. Under the package driven
approach the application(s) take responsibility for driving
integration by invoking integration services at the
appropriate points within the application’s processing path.
This will most likely be the preferred approach adopted by
package vendors to maintain a grip on the enterprise IT
environment. However it is clearly an “all your eggs in one
basket” approach that is unsuited to a large enterprise with
a more strategic outlook. Hence the focus here is on an
architecture that locates integration processes outside of
the applications and, indeed, in the middleware space.
Under this process driven approach we actually leverage
the relatively passive nature of applications as servers. The
middleware itself acts as a client on behalf of other
applications. It calls server applications by invoking
services and drives integration via workflow. Our goal is to
be able to automate an entire end-to-end business process
by triggering a sequence of service calls based on
application state changes. This is sometimes referred to as
straight through processing or by the even catchier
phrase, zero touch enterprise – easy to say but not so
easy to achieve in practice! Remember that one of the
secrets to this approach is the ability of the middleware to
detect application events, which is most difficult given the
lack of application instrumentation in this area. Some methods for
solving this problem are briefly discussed in a later section.
Once we have established a set of baseline services our
goal is to be able to leverage these existing services in new
and potentially innovative ways. The notion is that we can
introduce new business processes by threading a series of
service calls in a new manner. This means no change to
applications and their middleware services, but only a new
middleware workflow definition in line with our ADAPTABILITY
principle.
► Business processes are managed externally from the application packages.
To begin with, applications should only project their
standard functions as a set of primitive services. These
can then be combined by the middleware in various
patterns to produce processing fragments, and eventually a
number of fragments are combined in a higher level
workflow to implement entire business processes. Of
course, under our consistent architectural approach, each
middleware process itself is projected as a callable service.
Hence a hierarchy of services can be evolved and
dependencies managed entirely within the middleware as
part of our enterprise integration architecture.
Only the middleware drives updates. The general rule is
that any application state change is driven via a
middleware push operation. Applications do not request
information to update their own state. Rather, the
application emits an event that is processed by the
middleware, where an event handler pulls the required data
from one or more sources, then updates the application
through a primitive service call. By delegating this type of
logic to the middleware there is greater FLEXIBILITY and
more MANAGEABILITY over the environment. This aspect is
further explored later when discussing middleware patterns.
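The rule that only the middleware drives updates can be illustrated with a hypothetical event handler: on receiving an event, the middleware pulls current data from the system of record and pushes it into the target through a primitive service. All systems and names below are invented for the sketch.

```python
# Stub system of record and a stub target application.
crm = {"C-1": {"name": "Acme", "status": "active"}}
billing = {}

# Primitive services projected by the applications.
def get_customer_details(customer_id):
    # "Pull" from the system of record.
    return crm[customer_id]

def set_billing_customer(customer_id, data):
    # "Push" into the target application.
    billing[customer_id] = dict(data)

def on_customer_update(customer_id):
    """Middleware event handler: the billing package never requests
    data to update its own state; the middleware drives the push."""
    data = get_customer_details(customer_id)
    set_billing_customer(customer_id, data)

# The CRM emits an update event; the middleware does the rest.
on_customer_update("C-1")
```

Delegating this logic to the middleware means the pull source, the push target, or both can be re-wired without touching either application.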
The business view

It is important to note that the relocation of business
processes outside of the application packages does not
relocate the business data, nor does it undermine the value
of the packages themselves. Although the middleware
moves data between applications this data transfer is
transient in nature and no business data is managed by the
middleware. All business data must have a system of
record, being an application.
A system of record is the reference point for a specific type
of business data, and should be used to synchronise the
view in other applications. Hence a Billing or Financial
package will be the system of record for an Account, while
the CRM system is likely to be the system of record for a
Customer. As business transactions take place and
applications are updated the idea is that the business view
is synchronised by transferring data between applications.
The lower the middleware latency, the more timely the
business information, resulting in a more consistent
enterprise view.
► The business maintains a consistent enterprise view via application packages
The result is that the business maintains its span of control
via the various application packages while the middleware
invisibly greases the wheels of the enterprise by
automating business processes, thus avoiding time
consuming manual processing and its associated errors.
Summary of principles

In this section we explored in detail concepts behind the
construction of an enterprise integration architecture. In
that process we have developed a number of principles for
further architectural development and implementation.
These are summarised below, and will be referred to as we
describe the guidelines for implementation in the next
section.
Principle Meaning
ACCESSIBILITY The architecture must be comprehensible by all stakeholders including the Business, IT managers, IT architects and developers.
ADAPTABILITY Components of the architecture developed for one purpose can be reused in new ways as the business changes, or new requirements emerge.
ALIGNMENT The architectural and implementation approach is aligned to the Business strategy, the IT strategy and the corporate culture.
CONSISTENCY Concepts, terms and patterns employed by the architecture have a precise meaning and are used in a consistent manner.
ENCAPSULATION Architectural services are abstracted and behave according to a service contract description, such that their implementation details need not be exposed.
FLEXIBILITY The architecture supports opportunistic implementations by accommodating sudden business changes via a flexible set of technologies and methodologies.
ISOLATION Components of the architecture are designed to perform specific functions in an independent manner to isolate the impact of any changes.
MANAGEABILITY The architecture must cater for effective operations where the components are robust and behave consistently in response to environmental and management procedures.
PARSIMONY The architecture is only as complex as it must be, and no more, such that it fulfils all principles whilst emphasizing the simplest approach possible.
SCALABILITY Components of the architecture should scale to provide mechanisms that support both short term increases in load, and long term growth in application resources.
Architectural principles
Part 3 – EAI Implementation Guidelines

Now that the challenges are understood, and the
architectural principles have been explored, this section will
examine the EAI implementation considerations in a more
detailed and practical manner.
Message bus

At the heart of EAI there is some type of messaging
technology and a method of transport. While the term
message bus is commonly used to describe this concept
the actual message and transport architecture, or the
communication model, can vary, and may not be bus-like
at all. When looking at the message bus technology it is
important to understand a number of factors, such as:
transport protocols used; the connection model between
the requester and the target for handling any requests; the
abstraction model for naming and locating source and
destination services; support for synchronous and
asynchronous processing, including an event service; and
the syntactic and semantic properties of the messaging
layer.
► The message layer abstraction model is the key determinant of your enterprise integration architecture
With CORBA and other distributed object technologies, for
example, transport connections are point-to-point and in a
large-scale integration environment the number of
connections does matter, significantly affecting
SCALABILITY. Many specific EAI technologies employ a
more hub-like architecture, where connections are limited
to one per hub and spoke pairing, whilst others may be
somewhere between, with shared connections between
major nodes on the “bus” and point connections radiating
from these. Since connections provide a form of context, it
is also important to understand how the abstraction model
for naming and location services maps to messaging and
transport in the implementation. This will affect the
frequency of opening and closing connections.
There are three principal abstraction techniques found at
the messaging layer – objects, services, or queues. It
should be noted that many EAI technology sets support all
three and this in itself can create confusion. The naming
and location services may differ in the way they are
provided for each type of abstraction and certainly the
implementation at the programming level will differ. Hence
knowing when and where to use each abstraction layer is
an important concern of the enterprise integration
architecture. More will be said on this in the next section.
Regardless of the actual technology selected for an
implementation, the concept of a message bus is a
powerful one that provides ACCESSIBILITY in describing the
architecture’s distributed messaging capability. Its main
benefit is that it provides location transparency, meaning
that messages are sent to logical destinations, and the bus
deals with the actual routing to the target process. By
supporting ISOLATION the bus simplifies the concerns of all
message users, for requests, replies and events.
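Location transparency can be pictured as a routing table owned by the bus: producers address logical destinations only, and the bus resolves the actual endpoint. This is a toy sketch; the hosts and service names are hypothetical.

```python
# The bus maps logical destinations to concrete endpoints; message
# producers never know (or care) where the target process runs.
routes = {"createAccount": "billing-host-1"}
delivered = []  # stands in for actual transport delivery

def send(destination, message):
    endpoint = routes[destination]  # routing is the bus's concern
    delivered.append((endpoint, message))

send("createAccount", {"customerId": "C-9"})

# Relocating the target service touches only the bus configuration;
# no message producer changes at all.
routes["createAccount"] = "billing-host-2"
send("createAccount", {"customerId": "C-10"})
```

The second send reaches the new host without any change to the sender, which is precisely the ISOLATION benefit the bus provides.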
► A major goal of any EAI implementation will be a loose coupling of systems.
What loose coupling really means

Loose coupling is a design goal for any EAI implementation
and is a key mechanism to deliver strong compliance with
the ISOLATION principle. There are three areas to consider
in the context of loose coupling: connections, processing,
and information. Let’s explain a little about each of these
aspects before we look at how middleware technologies
can deal with the issues.
From the connection point of view, loose coupling implies
that a connection set up between two components,
one acting in the client role and the other acting in the
server role, is never directly specified by the
communicating components. There is always a mediation
function involved that will determine the appropriate
platform and process that should be connected. This level
of indirection may occur at configuration time, and/or at
runtime via a broker.
A change in the configuration can be used to modify the
target addressing or similar communication parameters
used to control connections at the transport layer. This
provides coarse-grained environmental control, particularly
during stages of system migration, upgrades, or platform
changes, where there might be several middleware
“instances” to deal with such as production and test
systems, multiple applications (say during phased
migration), back-up and recovery states, etc.
During runtime, there may be an active mediation process
that dynamically manages connection set-up. This will
generally work at two levels: at the abstraction layer, to
provide an object request broker (ORB) mechanism, or a
message broker service; and at lower management layers
to allow for middleware load balancing, failure recovery,
etc. In any case, the loose coupling of connections is a
significant factor in reducing the impact of any change, and
aiding the overall MANAGEABILITY of the infrastructure.
From a processing point of view, loose coupling implies
that each application can perform its processing role
independently of the other. This view is taken in a strong
sense to mean independent in time, as well as being an
independent implementation. Note that this precludes
distributed online transactions from being defined as
loosely coupled as there is clearly a tight execution
dependence between the participating components.
Independence in time is what is really meant by the term
asynchronous processing. Information that has
completed processing in one system, can be transferred in
part or whole to other system(s) for subsequent processing.
The transport delays involved between any two systems
may not be predictable from either party. This aspect can
be extended further, such that although the systems are
“coupled” (albeit loosely) they do not have to run
concurrently to perform an integrated function. That is, they
could have asynchronous availability. In practical terms,
this means that information processed by one system can
be transferred, in a store and forward fashion, to other
system(s).
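Store and forward transfer under asynchronous availability might be sketched as follows. This is an in-memory illustration only; a real middleware queue would be persistent, as discussed later in this section.

```python
from collections import deque

class StoreAndForward:
    """Minimal store-and-forward sketch: the sender deposits messages even
    while the receiver is unavailable; delivery happens whenever the
    receiver next comes online (asynchronous availability)."""

    def __init__(self):
        self._queue = deque()

    def send(self, message):
        # Accepted regardless of the receiver's current state.
        self._queue.append(message)

    def deliver_all(self, receiver):
        """Called when the receiver becomes available; drains in send order."""
        while self._queue:
            receiver(self._queue.popleft())
```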
Asynchronous is an adjective
It is important to note that asynchronous communication is not the same as asynchronous processing, which is not the same as asynchronous availability. The subtleties here are profound, and will continue to be explored in the remainder of this paper.
Finally, from an information point of view, loose coupling
implies that any two systems involved only exploit public
interfaces, and do not have deeper knowledge of the
other’s informational model. This can be a common pitfall
even where initial intentions are good.
For example, during the development of an integration
solution the server behaviour might be “learned” and
accommodated by the developer who codes the client
behaviour accordingly. This will break ENCAPSULATION since
specific server behaviour is widely dispersed into all
systems that make use of the client code. Even where the
first two aspects of loose coupling (connections and
processing) are observed, this type of tight informational
interdependence renders loose coupling from a practical
implementation perspective quite ineffective.
Loose coupling and technologies
In almost every case the use of distributed object
technology, such as CORBA, COM or EJB, will produce a
tight coupling. This is primarily because objects
are designed to maintain state, and the various interfaces
of each object must be understood in relation to both the
object’s state and its behaviour. You might have noticed
that this need to manage state leads in turn to the
introduction of additional programming artefacts. Examples
are a “handle” to maintain context for communication with a
specific back-end object, or a “call-back” object to support
asynchronous replies. A “handle” is a program level
artefact that represents to the caller not just a type of
object, but indeed an object instance. This program level
binding of caller to object produces an orientation that
clearly favours synchronous processing.
Support for asynchronous processing is generally dealt with
poorly by distributed object technology. When handling
complex transactions, errors, or failure scenarios, things
get even more intertwined. It’s a slippery slope that leads to
information interdependence. The result is a tightly coupled
model that is suitable for online activities using
synchronous communications, such as between the
presentation layer of an application and its back-end
services. However, it is unsuitable for EAI messaging as it
violates ISOLATION by assuming a great deal of knowledge
of the server objects and their behaviour within the client
application. This makes loose coupling difficult to
implement and in a complex enterprise environment this is
a change control nightmare.
There is an important distinction to note here. There is
absolutely no implication that distributed object
technologies per se are inferior. Indeed the enterprise
integration architecture we have outlined is based on object
principles, and object technologies are an ideal choice for
implementing software components of any sort, including
integration components. The native programming model of
distributed object technologies provides for application
integration in a broad sense, and is suitable for many
situations, but not as an overall EAI solution.
Client and server roles
EAI deals with application-to-application integration, and hence the terms client and server, as they apply to EAI, refer to roles only. An application can adopt the client role for one set of integration interactions, and adopt quite separately the server role for another set of interactions.
At this point the reasons for introducing a service oriented
architecture, and of course a service based technology,
may be clearer. The key here is that services are
stateless. This means that a service behaves with
CONSISTENCY, in exactly the same way no matter when or
how it is called. From a middleware perspective, each
invocation of the service stands alone. There is no client or
server context maintained between calls. Having no state
means that calls made from a single client to the same
service may be passed to different instances of the
destination software, thus catering for SCALEABILITY in a
horizontal direction. Any “handle” required to send the
message is only that required for middleware access, and
is not a handle to a specific service or object.
Rather than invoking the service directly through a specific
API, the client passes a message containing the service
call details to a middleware component that acts as a
service broker. It locates an appropriate component
instance that can handle the call, typically via another
middleware component that can queue the message to the
actual target for processing. Depending on the type of
middleware API used, the client may be released and can
continue processing; however, there is no guarantee that
the call can be serviced immediately. Hence queues,
whether persistent or transient in nature, are a necessity.
For middleware that exposes the “queue” as a visible
resource, this means that the client will target logical
destinations by queue name. Since a queue also tends to
be an external entity with configurable properties it
represents a clear point of intersection for the abstract view
and the implementation view. This highlights the need to
address architectural decisions on how to name queues,
whether different types of messages can be sent to a single
queue, where to put queues, etc.
Alternatively, the abstract model can remain at the service
level only, whereby queues remain an internal middleware
and/or implementation detail. The service broker allows a
client to invoke a service by name, or emit an event to a
particular service name. Servers may register to handle
request services, or to listen to various event services.
Service invocation can be implemented by a
publish/subscribe protocol, a message/queueing protocol,
or other transport models. In any case queues are still used
to store messages en route to service handlers, and may
be in memory, or persistent, however they are not exposed
to the service users.
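The service-broker model described above, with queues hidden behind service names, might be sketched like this. The `register`/`invoke` names and round-robin dispatch are illustrative assumptions; they stand in for whatever registration and transport protocol the chosen middleware provides.

```python
class ServiceBroker:
    """Illustrative service broker: servers register stateless handlers under
    a service name; clients invoke by name only, never by instance."""

    def __init__(self):
        self._handlers = {}

    def register(self, service_name, handler):
        self._handlers.setdefault(service_name, []).append(handler)

    def invoke(self, service_name, message):
        instances = self._handlers[service_name]
        handler = instances.pop(0)    # rotate: any instance will do, because
        instances.append(handler)     # the service itself holds no state
        return handler(message)
```

Because each invocation stands alone, successive calls may land on different instances of the destination software, which is precisely the horizontal SCALEABILITY described above.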
To take things further, it is possible for the client to make a
service call in a form that has no specific information
binding with an existing server implementation. However,
the messaging layer can use transformation logic to
reformat the request into a form that can be processed by a
target application. In this respect the middleware actually
honours the client view of the service interface contract,
then uses workflow and transformation tools to invoke the
server view to produce a response. In fact the client view
for a single request may involve information sourced from
multiple server calls managed and merged by the
middleware. By allowing separate client and server views,
and rendering them complete and compatible using
middleware tools, we further increase loose coupling, and
remove dependencies on specific application packages and
their defined services. When replacing an application
package an impact to this transformation logic in the
middleware can be expected, but there is an improved
chance that further impact on application packages is
limited.
While there are distinct advantages of the service oriented
architecture over a distributed object architecture,
particularly to achieve loose coupling, much will depend on
the selected tool sets, as well as the organisation’s
experience and skills, meaning the overall ALIGNMENT of the
EAI middleware with the business and IT environment. The
notion of “one size fits all” seldom applies to IT.
► Developing adapters with the appropriate architectural concerns and granularity of function is a critical aspect of EAI success.
Adapters
Most applications do not ship with middleware connections.
They must be adapted to make use of the EAI environment,
and the component that allows access to application
services from the middleware messaging layer is called an
adapter. In fact, there are two types of adapter. One acts
in the server role allowing application services to be
invoked by middleware requests, and returning replies. The
other acts in the client role, publishing events or requesting
information on behalf of the application.
The server adapter for an application is the simplest to
understand. It advertises application functions as services
that can be called from the middleware environment. Its
main role is to accept the request messages from the
middleware and perform the relevant application data
access and/or business processing, typically by calling an
API, and then generate the reply message. Such an
adapter provides stateless middleware services to access
application logic, and contains no business logic itself.
Instead, it should always make use of application business
logic via API access, and strictly avoid direct read/write
access to the application database.
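A thin server adapter of this kind can be sketched as a mapping from request messages to API calls. The message shape and API names here are hypothetical; the point is that the adapter holds no business logic of its own and only translates between middleware messages and the application's published functions.

```python
def make_server_adapter(app_api):
    """Sketch of a thin server adapter: it maps middleware request messages
    onto the application's published API and wraps the result as a reply
    message. It holds no business logic and never touches the application
    database directly."""

    def handle(request: dict) -> dict:
        # Look up the primitive service by name on the application API.
        operation = getattr(app_api, request["operation"])
        try:
            result = operation(**request.get("params", {}))
            return {"status": "ok", "body": result}
        except Exception as exc:
            # Application errors become error replies, not adapter crashes.
            return {"status": "error", "reason": str(exc)}

    return handle
```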
Although more complex services could be constructed in
the adapter, this is avoided in favour of publishing a set of
primitive services. These are closely aligned to the native
application functions. By keeping adapters thin we increase
ADAPTABILITY because composite services can be
constructed in the middleware layer using more productive
and manageable EAI tools. Such composite services are
created in order to support specific client views.
The J2EE Connector Architecture (JCA) describes the
components of a server adapter, and how to construct one
with the proper separation of concerns. Proprietary EAI
tools generally ship with an adapter construction kit to allow
customer specific adapters to be developed, whilst some
major application packages may have their own adapter
support for specific middleware tools.
The client adapter for an application has a different set of
concerns. It needs to be aware of internal application
events for two major reasons. The first is so that the
application itself may complete processing with data
sourced from outside the system. For example, when a new
Customer is added in a CRM system and the Account
already exists in the Billing system, the customer billing
address may be extracted and duplicated for easy
reference in the CRM system. The second reason for an
application to be aware of internal state changes is in order
to publish events that may be of interest to other systems.
A problem exists here in that few applications are
instrumented with a view to being probed for internal
events. Hence building an effective client adapter can be a
difficult proposition. There is typically no API support for
this type of application access. The adapter may need to
use database triggers, polling operations, or other custom
techniques in order to detect application changes. This
tends to make the client adapter more intrusive, and more
dependent upon the application internals – a problem in
itself as it breaks ENCAPSULATION. Developing a client
adapter with the proper architectural concerns is probably
the Achilles heel of EAI, but is also capable of providing the
highest return due to a substantial increase in application
ADAPTABILITY.
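One of the polling techniques mentioned above can be sketched as a snapshot comparison. This is a deliberately naive illustration; a production client adapter would also need to consider snapshot cost, event ordering, and intermediate states missed between polls.

```python
def detect_changes(previous: dict, current: dict) -> list:
    """Polling sketch for a client adapter: compare successive snapshots of
    application records (keyed by identifier) and emit change events that
    can be published to the middleware."""
    events = []
    for key, value in current.items():
        if key not in previous:
            events.append(("created", key, value))
        elif previous[key] != value:
            events.append(("updated", key, value))
    for key in previous:
        if key not in current:
            events.append(("deleted", key))
    return events
```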
Both server adapters and client adapters must be
configurable. Support for configuration control is a key
implementation mechanism of EAI to support FLEXIBILITY
and ADAPTABILITY in the environment. Things like
application connection control, security parameters,
advertised service names, etc, should all be the subject of
configuration. Adapter development and deployment tools
should support this capability.
Adapter implementation guidelines
Is it preferable to have just one adapter to cover all the
services of an application, or a series of two or more
adapters providing differentiated access to services? This
is a good question – one that deserves due consideration.
In fact, the answer will differ depending upon whether we
are talking about the development view, or the deployment
view.
► Optimal adapter behaviour requires an understanding of both the development and the deployment view.
Let’s start with the development view. Firstly, we should
recognise that the concerns of a client side adapter are
very different from those of the server side, so at least one
of each type of adapter will be required. Now, assuming
that all functions in the application can be accessed in a
similar manner, eg, via an API, then since we would need
to manage connections to the application, security, and
potentially other long-running context about the adapter,
the principle of PARSIMONY would mean that a single server
side adapter is the best option. Why build two, and
potentially introduce two sources of programming error,
when one will do? The same logic applies to the client side
adapter. So in general we can say you should build one
server adapter and one client adapter to expose all
necessary application function to the middleware.
Now let’s discuss the deployment view. This becomes
more complex since adapter processing has to deal with
the transaction profile and operational characteristics of the
middleware. To understand the issues here it’s best to look
at an example. Let’s say that 99% of application access is
“read only”, simply retrieving information from the
application, and only 1% is an update transaction. Further
let’s assume that concurrent read access is possible, but
that update transactions can only be handled serially.
Based on the development view considerations discussed
above we should have built the adapter with all functions
present. Now consider its run time characteristics. As the
read transactions arrive the middleware will be able to
scale the adapter, that is, it will automatically start new
instances of the adapter process to handle the load. This
SCALEABILITY is based on configuration parameters. Once
some limit is reached the system enters a steady state,
concurrently processing a number of read requests, with
others queued awaiting an available adapter instance. Now
we add to this mix an occasional update transaction,
remembering that this must be handled serially. Even if the
application protects itself from multiple concurrent update
attempts, it is a waste of adapter resources to invoke more
than one adapter instance at a time for an update. We will
just end up reducing the read throughput as adapter
processes block awaiting update access. This may look like
a case for some extra development effort but in fact the
solution is quite simply achieved using a configurable
deployment. Instead of just one “server adapter” we
configure “two” adapters for the application. Indeed they
are the same adapter code, and can perform exactly the
same set of functions, but one adapter instance is
configured to advertise and handle all read services, while
the other will only expose that subset which is for update
services. Further, the read-only configuration will permit the
middleware to scale the adapter process instances to
support an increasing transaction load, whilst the update
configuration is limited to a single process instance. This
provides the same smooth handling of read requests, while
update requests are effectively queued based on adapter
resource availability preventing concurrent access to the
application. Although this may appear to be a simple and
perhaps contrived example, the point is that many such
operational scenarios can be effectively addressed through
MANAGEABILITY of configuration parameters.
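The read/update split in this example can be captured entirely in deployment configuration over a single adapter codebase. All names and limits below are illustrative assumptions:

```python
# One adapter codebase, two deployment configurations (hypothetical names).
ADAPTER_DEPLOYMENTS = [
    {   # scaled horizontally for concurrent reads
        "name": "crm-adapter-read",
        "services": ["getCustomer", "getAccount", "getOrderStatus"],
        "max_instances": 20,
    },
    {   # single instance serialises updates
        "name": "crm-adapter-update",
        "services": ["setCustomer", "setAccount"],
        "max_instances": 1,
    },
]

def instances_allowed(service: str) -> int:
    """How many adapter instances may concurrently handle this service."""
    for deployment in ADAPTER_DEPLOYMENTS:
        if service in deployment["services"]:
            return deployment["max_instances"]
    raise KeyError(service)
```

Update requests are thus queued behind the single update instance while reads scale freely, with no change to the adapter code itself.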
As an example of a more complex operational scenario,
take the case where some highly sensitive application
functions are restricted to a certain subset of secure users.
A separate adapter instance, with heightened security at
the physical and/or logical level may be more manageable
than a single mixed security adapter when it comes to
doing detailed audits. Again, the ability to partition the
services can be performed via configuration.
The bottom line is that thoughtful adapter development
takes heed of the PARSIMONY principle, whilst the
deployment view leverages the FLEXIBILITY and
MANAGEABILITY of the configuration environment. Adapters
are developed to be “thin” in function. Their role is merely
to project an application’s primitive functions as middleware
services. Through configuration and value add middleware
services, the adapter can be deployed to achieve a number
of functional and operational goals.
Messaging considerations
When looking closely at messaging it is very important that
we do not confuse the concerns of the EAI messaging
domain with those of the business domain. Terms such as
“transaction”, “synchronous”, “guaranteed delivery” may
apply to messaging technology in a very different manner
to how they may apply to the business view. The underlying
principles at stake are the same in each case, but the
concerns of each domain will involve a totally different
perspective. The business view is concerned only with the
end-to-end state of applications with respect to the
business processes they are supporting. These concerns
will be covered a little later, but first the focus is on how
these terms apply to messaging.
► Messaging services are core EAI facilities that are necessary but not sufficient for business level integration.
Messaging concerns involve issues regarding
communications between the various middleware services
only. They do not, and indeed cannot, cross over into
current application state and message semantics as they
affect the business view. Further, the communications
model involves message transport and an API for invoking
middleware services. This can add further confusion as the
middleware API itself may have a set of properties that is
different to the underlying transport. For example, the API
may provide a synchronous service and block the caller
sending a request message until a reply message is
received, whereas the transport mechanism may be
asynchronous in nature. Understanding the nuances of the
various interaction models is an important pre-requisite to
understanding how to implement a business transaction,
and ensuring that the language being used, and set of
concerns being addressed, are appropriate.
From an application’s perspective, there is an expectation
that a synchronous API means an interaction that will be
relatively fast, and can support an online business
transaction. That being the case, it is acceptable to block
on the middleware call, as there will be a quick result,
either successful or unsuccessful. This demands a reliable
messaging service, but does not require guaranteed
delivery. If the request cannot make it to the target at the
time it is sent an error is returned instead. Hence the
application is fully responsible for dealing with this sort of
error and any retry or recovery processing involved.
Although this can support both read and write business
transactions, it makes little use of more powerful EAI
features and does not present a strong sense of loose
coupling. In fact, as previously noted, this is the standard
model presented by most distributed object technologies. It
is useful only for time dependent business functions, such
as: an online enquiry or similar read operation where data
must be integrated for presentation purposes, and where
the recovery may be via the actual end user retrying such a
presentation layer function; or perhaps an update
transaction, especially involving strong information
interdependence, where the data must be synchronised
across two or more business systems in an atomic
operation.
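The synchronous, reliable-but-not-guaranteed model places retry responsibility squarely on the application, which might look like this sketch. The `send` callable is an assumed stand-in for a blocking middleware call that either returns a reply quickly or raises an error.

```python
def synchronous_request(send, request, retries=0):
    """Sketch of the synchronous model: block for a quick success or failure;
    any retry or recovery processing is the application's own responsibility,
    since the middleware offers no guaranteed delivery here."""
    for attempt in range(retries + 1):
        try:
            return send(request)
        except ConnectionError as exc:
            last_error = exc
    raise last_error    # all attempts failed: the caller deals with recovery
```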
Once we move on to time independent processing, the
use of guaranteed delivery does become very useful. In
this case the middleware contract is to guarantee the
delivery of the request to the target (given that it accepts
the call at the API level without error). Use of guaranteed
delivery only makes sense where there is asynchronous
processing, and/or asynchronous availability (as
described previously) between the sender and the receiver.
In order to guarantee delivery the middleware cannot “lose”
the message anywhere between the sender and receiver.
Hence it will generally use a form of store and forward, or
persistent queuing, to ensure that a copy of the message
is available until it can be delivered. This can also be
referred to as deferred delivery.
Not losing the message is one thing, but not delivering it
more than once is somewhat trickier. Functions that can be
executed more than once without producing a business
error, such as “set my preferred credit card to VISA” are
said to be idempotent. Of course many business
transactions cannot be duplicated without introducing
problems. For example, a function to “withdraw ten dollars
from my bank account”, if duplicated, will result in an
incorrect balance. It is not idempotent. So the strongest
and most useful middleware contract is actually
guaranteed once only delivery, which means that the
service is guaranteed to be invoked only once for a
successful message delivery. In order to achieve this the
middleware will generally make use of ACID type
transactions to ensure that the message request is
accepted from the caller and placed in a persistent queue
under the umbrella of a single transactional commit
process. Similarly, it will dequeue the message and deliver
it to the receiver under a second transactional commit
process. If it cannot deliver, due to the target being
unavailable for example, then the message is not dequeued
and the delivery can be retried later. So the middleware
delivers the message and removes it from the queue in a
single transaction, or it does neither. Such mechanisms are
significantly more costly to implement, more complex to
manage, and can introduce challenging performance
issues. Hence this mechanism should only be used as is
necessary for important update operations, particularly any
that are not idempotent. However there are definite classes
of business problem that need this support, and having
such powerful techniques within the middleware allows
applications to offload responsibility for data delivery to
specific software built for this purpose.
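The transactional dequeue behaviour described above can be mirrored in a small sketch. This in-memory version only reproduces the control flow; a real implementation would rest on persistent queues and ACID transactions as the text describes.

```python
from collections import deque

class OnceOnlyQueue:
    """Sketch of guaranteed once only delivery: a message is removed from the
    queue only if the receiver accepts it, so delivery and dequeue succeed
    together or not at all."""

    def __init__(self):
        self._queue = deque()

    def enqueue(self, message):
        self._queue.append(message)

    def deliver_next(self, receiver) -> bool:
        if not self._queue:
            return False
        message = self._queue[0]      # peek: do not remove yet
        try:
            receiver(message)
        except Exception:
            return False              # delivery failed: message stays for retry
        self._queue.popleft()         # deliver + dequeue as one "transaction"
        return True
```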
Sounds perfect – but it's not that cosy after all. There are
two major aspects to be further considered, both of which
return to the point that the messaging view and the
business view address a different set of concerns.
The first issue, which is a business concern, is that the
message layer “guarantee of delivery” will not guarantee
that the receiver can process the request without error.
Hence, from the messaging view guaranteed delivery may
work perfectly well, but at the business level the transaction
may fail! This subtlety is often overlooked and to some
extent this has been encouraged with middleware slogans
such as fire and forget. As an architect responsible for
business processing integrity you cannot forget what may
happen to each message sent as it reaches its destination.
It may be processed either successfully or unsuccessfully –
and one cannot simply forget that it may be the latter. The
middleware cannot directly assist in how you deal with
recovery from such business level errors, since the actual
business data sent in the message was delivered without
error and in accordance with the middleware contract.
Although the application may not be able to process the
request validly, there is no way to notify the caller at the
time when the message is received, since processing
between client and server is not synchronised at this point.
An asynchronous business level flow is required to carry
the failure information. These issues occur at an
information level where both the client and server
applications need to be aware of their responsibilities to
ensure business processing integrity.
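Such an asynchronous business-level failure flow might be sketched as follows, with the middleware's delivery contract already satisfied before this code runs. The message shape and event names are assumptions for illustration:

```python
def process_delivered_message(message, apply_update, publish):
    """Sketch of a business-level acknowledgement flow: the message arrived
    without middleware error, so a business processing failure must travel
    back to the originator as a separate asynchronous message."""
    try:
        apply_update(message["body"])
        publish({"type": "business.ack", "ref": message["id"]})
    except ValueError as exc:
        # Delivered successfully, but the business rules rejected it.
        publish({"type": "business.error", "ref": message["id"],
                 "reason": str(exc)})
```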
A second issue is that of delay. Although a message can
be guaranteed to be delivered, there is usually no
guarantee as to when this might happen. If you combine
this with the first issue, namely that the transaction is not
guaranteed to work, then clearly the value of the
“guarantee” at the business layer is severely diminished.
Construction of the appropriate scenarios to deal with long
latency between processing, and potential errors during
processing is an architectural challenge and one of the
reasons that makes integration so complex. After one
system processes some information, then sends a
message for subsequent processing in another system, but
before that message is processed, there is a potential out-
of-sync window within the overall enterprise view of the
business state. One isolated message may only affect
one customer, or one order, and is not likely to be noticed
in the normal state of affairs, and is even less likely if the
out-of-sync window can be made very small. But if there
are hundreds or thousands of messages in this state then
the issue may not be so easily ignored, and the case for
reliable high performing middleware is clear.
The key message here is that EAI provides mechanisms
that are necessary but not sufficient for end-to-end
integration across the enterprise. EAI middleware is
necessary to provide reliable communications, and even
advanced mechanisms such as guaranteed delivery, but it
is not sufficient to ensure business level information is kept
in an up to date synchronised state. This is an enterprise
information issue that requires an architectural solution,
and provides a rationale for a set of EAI business patterns.
Introduction to EAI business patterns
One of the most important ways in which we can introduce
a strong dose of simplicity to the complexity of the
integration environment is by the use of patterns. Many
different types of arbitrarily complex integration scenarios
can be decomposed into patterns – common component
interaction sequences – that are the reusable building
blocks of integration design. Once again we need to
differentiate between the messaging view and the business
view in any discussion.
Most discussion on EAI patterns concerns the messaging
view, and begins with basic patterns to perform
synchronous inquiry, synchronous update, asynchronous
inquiry, asynchronous update, event notification, etc, and
then might extend to more complex patterns
such as composite (multi-object) inquiry, transactional
(multi-object) update, and others. These patterns tend to
focus on the use of the middleware technology at the
programming level, such as what API calls to use, whether
there is blocking or not, how to determine if the request is
accepted, whether any reply will be synchronous or
asynchronous, and other questions regarding the
middleware interactions. These patterns are extremely
useful to guide an implementation but the details will vary
depending on the nature of the technology used. For
example, CORBA patterns may involve additional objects
such as callback objects, while MQSeries may involve reply
queues, dead letter queues, etc.
Of more general interest is the business view of
processing patterns. This is where one or more interactions
can be understood in the context of achieving a complete
business operation that forms (all or) part of an end-to-end
business process. Any business object(s) concerned may
reside within multiple applications that have varying
availability and/or unsynchronised processing. Hence, once
again, it is the information architecture that is most
important here, as the overall mission is to change state in
one or more business objects within the enterprise with
absolute business integrity.
If we begin by assuming that all enterprise business
information is in a state of equilibrium – synchronised and
processing in harmony – then we can define the concept of
an EAI business pattern as the set of EAI interactions that
occur following an application level stimulus that moves or
restores the enterprise business information state to
equilibrium. By implication, in-between states or out-of-sync
conditions can occur between the initial stimulus and the
final flow of the pattern. These concepts are expanded
below.
The “aggregate enterprise state” pattern
The most basic pattern is the aggregate enterprise state,
involving a read-only or get operation against a business
object that is implemented within a number of application
systems.
► Use the aggregate enterprise state pattern to perform read-only enquiries from a number of integrated sources.
No state changes are actually involved, so that enterprise
information remains stable. The operation is restricted by
our implementation guidelines to be synchronous from the
view of the client, regardless of whether synchronous
and/or asynchronous messaging is used to fetch the
results. This is because the major use is to support an
online business inquiry, or for an automated process to
retrieve current state, where the state information to be
presented is a combined view that is not limited to the data
held by a specific application. An example might be an
operation such as getCustomerStatus where the
customer information comes from several applications,
such as CRM for the descriptive details, a Billing system for
the current account balance, and perhaps a Financial
system to supply a credit status indicator. The
getCustomerStatus service is not a primitive service of
any application and hence requires EAI support for its
implementation.
For the aggregate enterprise state pattern the operation
begins with an EAI request to getCustomerStatus, and
completes when the reply is received (or an error or
timeout occurs) in the client system. In order to complete
the request the middleware must accept the client request
message, then generate three separate get operations, one
against each of the CRM, Billing and Financial systems,
and finally merge the separate replies into a single reply to
the client that satisfies the terms of the
getCustomerStatus service contract. Such processing is
typically performed by a message broker component, but it
can be implemented using workflow, or even a component
purpose-built for the task. Ideally the downstream requests
to each application can proceed concurrently to minimise
the overall response time, and the last reply is used to
complete the client response message.
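The fan-out and merge for getCustomerStatus can be sketched with concurrent downstream requests. The source systems follow the paper's example; the fetch callables are assumed stand-ins for the per-application get operations issued by the message broker.

```python
from concurrent.futures import ThreadPoolExecutor

def get_customer_status(customer_id, sources):
    """Sketch of the aggregate enterprise state pattern: fan the request out
    to each source system concurrently, then merge the replies into the
    single client view defined by the service contract."""
    merged = {}
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(fetch, customer_id)
                   for fetch in sources.values()]
        for future in futures:
            merged.update(future.result())   # last reply completes the view
    return merged
```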
The operation is idempotent, or safe to retry, since there is
no change in any application state based on the EAI
operations. If the request fails, or times out, then the client
is free to retry the operation as it sees fit. Given all these
factors the message flows involved need only be best
effort. This means no messages need to be stored, or
persisted to disk. Performance is maximised, but the
slightest problem will abort the transaction and produce a
failure condition. However, this is typically the most cost
effective solution for this type of processing as there is little
to be gained by a more robust approach that involves the
time independent delivery of results – the client would
merely time out and discard any stale results anyway.
The “synchronise enterprise state” pattern
Whilst it is possible for an application to use an aggregate
enterprise state pattern in order to fetch values to be stored
in another application, this is not recommended.
► Use the synchronise enterprise state pattern to replicate information from the system of record to other systems
As previously outlined, the goal is that the middleware
drives all updates from an application integration
perspective, the reason being that it will be simpler (over
time) to establish robust update sequences via standard
middleware mechanisms than via specific application code.
Indeed, the middleware has been specifically delegated the
responsibility for dealing with non-functional issues such as
non-uniform system availability, store and forward
processing, guaranteed once only delivery, error handling,
retries, transactional updates, and workflow support, that
together provide a framework for implementing a robust
integration update scenario.
This philosophical approach to implementation, where the
middleware drives all update activity, establishes a simple
way of approaching new patterns and offers significant
FLEXIBILITY. Here is a simple example. Let’s say that the
Customer record in a CRM system contains various
customer details, including the current billing address, and
that this is the system of record for all Customer
information. All Customer updates take place via the CRM
system, but the Billing system must be informed when the
billing address changes so that when bills are generated
they will be posted to the correct address. Hence, when we
update the CRM billing address we need to send an update
to the Billing system.
A typical approach would analyse the use case
involved and construct a use case realisation as follows.
Firstly, there is a need to recognise that the Customer
billing address has changed. This requires a modification to
the CRM system to trap an update to Customer where the
billing address is changed. When this application event is
detected there is a need to update the Billing system, so
CRM calls a setAccountBillingAddress service,
passing the AccountNumber held in its Customer record,
and the new address details. This service would be
implemented to update the Account object in the Billing
system. It may even be recognised that this call needs to
be persistent – we cannot tolerate a lost message here or
we will lose the update to Billing, so we define the call to
use a guaranteed delivery service contract. This will work –
but there is a much better way!
Firstly we need to recognise that there is a more general
pattern involved here. Conceptually an application stimulus
(in CRM) has caused an out-of-sync condition in the
enterprise state (as the billing address in CRM is no longer
consistent with that in the Billing system) and an EAI
business pattern is required to restore equilibrium. In fact
what we are doing is replicating some business data, or
state information, from one system – the system of record –
into another system. This is a common integration
requirement to satisfy the application processing view
demanded of typical packages found within an enterprise.
In the example used it is just the billing address that is at
stake, but if we look further ahead there may be another
system, such as a Web portal, that needs to know when
some other Customer details have changed. Modifying the
CRM system each time there is a similar requirement is not
the answer, as this merely increases informational
interdependence. A common approach is required that
maintains the appropriate ISOLATION of components.
The synchronise enterprise state pattern adopts such an
approach by breaking the processing into steps that can be
reused. The first step is for the system of record (CRM) to
simply declare an EAI event, such as
notifyCustomerUpdate, when the Customer record is
updated. This event must carry the CustomerID, and may
carry other limited but useful data. At this point CRM’s
integration responsibility for a Customer update is
complete, no matter how many other systems are impacted
or what processing is involved in each system. It has
become a middleware responsibility to drive any
subsequent updates and restore enterprise equilibrium.
This transfer of responsibility – from CRM to the
middleware – is part of the event service contract. This
transfer must be guaranteed: accepting the CRM event
requires that the middleware store the event before
acknowledging receipt. This can be
conveniently achieved under transactional control, so that
persisting the event and notifying CRM that it has been
accepted are part of a single transaction. If for any reason
this process fails to complete successfully, CRM can set an
indicator to that effect and retry the notification event
according to some retry algorithm.
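As a minimal sketch, this persist-then-acknowledge handover might look like the following, using an in-memory SQLite table as a stand-in for the middleware's persistent event store. The table layout and return values are assumptions for illustration only.

```python
import sqlite3

def accept_event(conn, event_name, customer_id):
    """Guaranteed event acceptance: the middleware persists the event,
    and only a successful commit yields an acknowledgement to CRM."""
    try:
        with conn:  # one transaction: persist, then acknowledge
            conn.execute(
                "INSERT INTO eai_events (name, customer_id) VALUES (?, ?)",
                (event_name, customer_id),
            )
        return "accepted"    # responsibility transfers to the middleware
    except sqlite3.Error:
        return "retryLater"  # CRM retains retry responsibility

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE eai_events (name TEXT, customer_id TEXT)")
```

If the insert fails, nothing is committed and no acknowledgement is given, so CRM can flag the event and retry according to its own algorithm, exactly as described above.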
From an EAI perspective, since one or more components
can register interest in the event, the actions that can now
take place may be arbitrarily complex. But to follow our
simple example through, an event handler will be
configured to perform the Billing system update in order to
restore enterprise equilibrium in this area. It firstly issues a
getCustomer request to the CRM system, using the
CustomerID extracted from the event to access the
appropriate Customer object. From the reply data obtained
it may be able to tell that the billing address has changed,
but more than likely it cannot. It extracts the new billing
address, as well as the AccountNumber, from the
Customer object reply, and then issues a
setAccountBillingAddress that will update the Billing
system. Workflow, a transaction monitor, or a purpose-built
component could implement this logic. If the Billing system
is unavailable, or the operation fails but can be retried, the
event handler will retry at a later time according to the
middleware policy for recovery. Alternatively, the
setAccountBillingAddress operation could be a
guaranteed delivery service and hence it will eventually
deliver the message as part of its service contract. If all
systems are working normally then the out-of-sync window
is only very small, and the enterprise equilibrium is quickly
restored. However, when problems do occur that prevent
immediate processing, such as lack of immediate
availability of resources, the objective of the synchronise
enterprise state pattern should be to ensure that all
information necessary to complete processing is safely
managed such that the operation can subsequently
complete when the opportunity eventually arises, or an
error is clearly communicated to allow corrective processes
to occur.
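The event handler just described can be sketched as below. The getCustomer and setAccountBillingAddress calls are stand-ins for the real CRM and Billing adapter operations, and the data is illustrative.

```python
# Stand-in for the CRM adapter's getCustomer operation.
def get_customer(customer_id):
    return {"customerId": customer_id, "accountNumber": "A-77",
            "billingAddress": "1 New Street"}

BILLING = {}  # stand-in for the Billing system's Account store

# Stand-in for the Billing adapter's setAccountBillingAddress operation.
def set_account_billing_address(account_number, address):
    BILLING[account_number] = address

def on_notify_customer_update(event):
    """Restore enterprise equilibrium after a CRM Customer update."""
    # Re-read the system of record using the CustomerID from the event.
    customer = get_customer(event["customerId"])
    # Idempotent replication: safe to retry if Billing was unavailable.
    set_account_billing_address(customer["accountNumber"],
                                customer["billingAddress"])

on_notify_customer_update({"event": "notifyCustomerUpdate",
                           "customerId": "C42"})
```

Because the handler re-reads the system of record rather than trusting event payload, a retry after a Billing outage still replicates the current state.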
There are a number of advantages of this pattern. If any
integration requirements change the impacts are isolated.
For example, if more Customer data is required to be
passed to the Billing system there is no need to change the
CRM system. The event handler will simply extract more
CRM data and pass it on to the Billing system. If a new
requirement means that another system is interested in
changes to Customer information there is no impact to
either CRM or Billing. A new event handler, also triggered
by the notifyCustomerUpdate event, can be deployed
to support the solution. Further, the update part of the
pattern is typically idempotent. It can be safely retried as
the processing intent is to replicate state from the system of
record to a target system, resulting in a synchronisation of
enterprise state. (Care should be taken however, for it may
not be true that all event handlers will support retriable
operations). Finally, all operations involved,
notifyCustomerUpdate, getCustomer, and
setAccountBillingAddress are highly reusable. None
are specific to the pattern itself, but are merely used by the
pattern in a certain fashion to achieve the desired result.
The “perform enterprise task” pattern
The patterns described so far are about maintaining or
restoring enterprise state after an internal application state
change. However, since the middleware itself can drive
updates it can be requested to perform operations in order
to bring about new states in the enterprise view. In the
simplest cases a fire and forget mechanism may be
appropriate, where a request for action is made from an
application that has no further interest in tracking the
outcome. However, in the more robust and interesting
situation an action may be requested by an application that
remains interested in the specific outcome. This can use
the perform enterprise task pattern.
► Use the perform enterprise task pattern to request an enterprise level activity where the time to complete the task is unknown
An example is a task to perform a credit authorisation
check against a Customer, where the authorisation
process is unknown, perhaps involving automated and/or
manual interactions, the response time is undefined, and
the only allowable results are authorised,
notAuthorised, or processFailure. Given the
undefined response time this is clearly not a candidate to
be satisfied by a synchronous request/reply, and the
pattern described below is not merely an asynchronous
request/reply sequence, although it combines elements of
both styles of interaction. In our example, the pattern
begins when a performCustomerCreditCheck request
is issued from the CRM system. This request is in fact
implemented as a synchronous operation, and will be
effectively accepted and stored by the middleware as
described above for the CRM event that is part of the
synchronise enterprise state pattern. A failure here means
that retry responsibility remains with CRM. If successful,
then the synchronous reply given is requestAccepted,
and this terminates the request/reply sequence. At this
point there has once again been a transfer of
responsibility to EAI to complete the requested task,
perhaps through a workflow implementation, and within
CRM the state is flagged as
authorisationInProgress.
The performCustomerCreditCheck service
implementation would store the CustomerID as a
reference for the context of the task, and could even issue
a getCustomer request if more details were required. It
would then drive whatever backend interactions were
necessary to perform the task. These interactions do not
form part of the perform enterprise task pattern, and the
authorisation process can be considered a black box
implementation. Eventually, when this process is complete,
the task itself completes by issuing a
notifyCustomerCreditCheck event, which carries the
processing results. A middleware event handler is
configured to detect this event and update the Customer
object via a setCustomerCreditAuth request
implemented within the CRM adapter. This completes the
pattern and establishes a new stable state at the enterprise
business information level.
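A condensed sketch of the two legs of this pattern follows, with an in-memory dictionary standing in for CRM state and the black-box authorisation process omitted. The state values follow the example above; the code itself is an illustration, not the paper's implementation.

```python
CRM_STATE = {}  # stand-in for the Customer object's credit-check state

def perform_customer_credit_check(customer_id):
    """Synchronous leg: the middleware accepts and stores the request."""
    CRM_STATE[customer_id] = "authorisationInProgress"
    return "requestAccepted"  # terminates the request/reply sequence

def on_notify_customer_credit_check(event):
    """Asynchronous leg: the black-box process has completed and an
    event handler writes the result back (via setCustomerCreditAuth
    in the CRM adapter, elided here)."""
    result = event["result"]
    assert result in ("authorised", "notAuthorised", "processFailure")
    CRM_STATE[event["customerId"]] = result

assert perform_customer_credit_check("C42") == "requestAccepted"
on_notify_customer_credit_check({"customerId": "C42",
                                 "result": "authorised"})
```

Between the two legs the Customer remains flagged authorisationInProgress, so the undefined response time never blocks the requesting application.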
The “perform and monitor enterprise process” pattern
A more advanced version of the perform enterprise task
pattern is perform and monitor enterprise process.
Whereas the task pattern appears as a single step, the
process pattern caters for a multi-step implementation,
where tracking is applied to each step.
► Use the perform and monitor enterprise process pattern to start a long running process that will provide status updates of processing stages
As an example, consider a fulfilOrder processing
request that may be issued by an Order Management
system. The request carries the OrderID and begins in
the same manner, with a synchronous reply
indicating acceptance of the operation by the middleware
and allowing the Order to be flagged as
fulfilmentInProgress. Once again there is a transfer
of responsibility to EAI to complete the requested process
within the service contract.
The actual fulfilment process is again considered a black
box implementation. It can be assumed to be variable in
nature, complex, and involve many steps. If it were to be
implemented using the simpler perform enterprise task
pattern there would be no visibility of progress within the
Order object, making it more difficult to satisfy a query as
to where the order is up to in the fulfilment process. The
perform and monitor enterprise process pattern solves
this by implementing an event at the completion of each
step. As each step completes in the fulfilment process the
process manager issues a processStatusUpdate event
carrying the OrderID amongst other information. A
configured event handler detects this event on behalf of the
Order Management system, and issues a
setOrderStatus request that is implemented in the Order
Management adapter. As each new step completes, this
part of the pattern, which generates and handles events, is
repeated until a final fulfilmentComplete state change
can occur. Using this pattern the Order object is kept up to
date by tracking each stage of the fulfilment process as it
occurs, and can satisfy an enquiry as to the current stage
of processing.
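The step-tracking flow can be sketched as follows, with the fulfilment steps and the Order Management adapter reduced to illustrative stand-ins; the step names are assumptions, not part of the pattern.

```python
ORDERS = {}  # stand-in for Order status in the Order Management system

def set_order_status(order_id, status):
    """Stand-in for the Order Management adapter's setOrderStatus."""
    ORDERS[order_id] = status

def on_process_status_update(event):
    """Event handler configured on behalf of Order Management."""
    set_order_status(event["orderId"], event["step"])

def fulfil_order(order_id):
    set_order_status(order_id, "fulfilmentInProgress")
    # Black-box fulfilment: each completed step emits a
    # processStatusUpdate event carrying the OrderID.
    for step in ("picked", "packed", "shipped"):
        on_process_status_update({"orderId": order_id, "step": step})
    set_order_status(order_id, "fulfilmentComplete")

fulfil_order("O-9")
```

At any point during the run the Order object reflects the most recently completed stage, which is what allows an enquiry to report progress.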
The “process multiple objects” pattern
There are many cases where processing will target more
than a single business object, and must cater for an
arbitrary list of objects. It is usual that a single business
object can provide the initial context from which the list of
target objects is derived. For example, the Customer
object can provide the initial context for a query against
customer Orders, and hence a listCustomerOrders
service can return, in a single response, a list of Orders
that have been placed by the Customer. The response is a
list of OrderIDs, and once again it is preferable not to
overload the list operation by including extra parameters.
Additional information can be obtained through a
getOrder operation at the point that this information is
required, and hence it will always be up to date. The list
operation may also include some sort of filter, or selection
criteria, and while SQL may seem convenient it will also
break the ENCAPSULATION principle, so an abstraction in this
regard would be preferable.
In the example above for the perform and monitor
enterprise process pattern, the fulfilment process itself
was considered a black box that applied the required
processing logic to the OrderID passed in the
fulfilOrder request. Let’s assume that each Order was
in fact made up of multiple OrderItem objects, and that
fulfilment processing actually depended on the type of
OrderItem. Then the fulfilOrder service
implementation may use the process multiple objects
pattern and issue as its first step a listOrderItems
request to retrieve a list of ItemIDs for the Order. It now
has responsibility for tracking all OrderItems and their
processing state. For each OrderItem a subsequent
getOrderItem would be issued to get the item details and,
depending on the returned values, a separate instance of the
perform enterprise task pattern or the perform and monitor
enterprise process pattern may be invoked to meet the
specific requirements. For each step that completes
the events issued would carry both the OrderID and the
ItemID to maintain the appropriate status information in
the Order Management system.
This general pattern can be modified and used for a variety
of other situations that involve processing against multiple
objects. Note that in this pattern the middleware holds only
a set of enterprise identifiers that have been designed to be
robust in messaging flows. It holds no other application
context, such as a cursor or container object. It extracts
information only as is necessary to perform its processing,
and is never considered the system of record of any data,
only reporting processing states as they occur.
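A minimal sketch of this pattern follows, holding only enterprise identifiers and fetching item details on demand. All data and the per-item dispatch logic are illustrative assumptions.

```python
# Illustrative data in place of the Order Management system.
ITEMS = {"O-9": ["I-1", "I-2"]}
ITEM_DETAILS = {"I-1": {"type": "hardware"}, "I-2": {"type": "service"}}

def list_order_items(order_id):
    """Stand-in for listOrderItems: identifiers only, no cursor or
    container object is ever held by the middleware."""
    return list(ITEMS[order_id])

def get_order_item(item_id):
    """Stand-in for getOrderItem: details fetched only when needed,
    so they are always up to date."""
    return ITEM_DETAILS[item_id]

def fulfil_order(order_id):
    statuses = {}
    for item_id in list_order_items(order_id):
        item = get_order_item(item_id)
        # Depending on the item type, each OrderItem could invoke a
        # perform enterprise task or a perform and monitor enterprise
        # process instance; here we just record an illustrative status.
        statuses[item_id] = "dispatched:" + item["type"]
    return statuses
```

The middleware tracks each ItemID against the parent OrderID, so completion events can carry both identifiers back to the Order Management system.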
Conclusions
Many aspects of EAI are not covered in this document.
Topics such as scalability, performance, security, version
control, management, and so on are equally important areas to
understand in any implementation. However, these mostly
descend quickly into technology specific discussions
whereas the thrust of this paper is technology independent,
the goal being to understand the key concepts that apply in
nearly all situations.
Emphasised throughout has been the need to understand
the business goals and to ensure that the EAI
implementation has a direct connection to these business
goals, providing added value and increased flexibility each
time new infrastructure is deployed. Complexity is an
inherent aspect of the integration landscape, but through a
carefully planned and layered management approach,
alongside a simple, consistent and accessible architecture,
the complexity is clearly manageable when placed in the
right hands.
Indeed the essential message to be distilled from this paper
is that a successful EAI strategy depends largely on the
effort applied in developing an enterprise integration
architecture with the appropriate qualities to guide the EAI
implementation in a manner that delivers commercial
outcomes. Although many architectural guidelines have
been suggested, it still remains the case that each
particular enterprise will have its own peculiarities, and as
in many situations, the key to adopting the right approach is
simply to have the right experience.
For further information, visit us at www.eservglobal.com,
or please contact us at [email protected]