service oriented architecture: integration, design patterns, toolkit (full course)
Post on 17-Aug-2015
SERVICE ORIENTED ARCHITECTURE: INTEGRATION TOOLS,
PATTERNS AND SOLUTION TECHNIQUES
MODULE TOPIC
BASIC PRINCIPLES OF ENTERPRISE INTEGRATION
LEARNING OUTCOMES
Learn the basic concepts often utilised in the context of Integration Architecture
Understand the high-level overview of the different architecture variants (Point to Point, Hub and Spoke, Pipeline, and Service Oriented Architecture)
Understand the different types of Service Oriented integration patterns, data patterns and future integration technologies.
HIGH LEVEL OVERVIEW OF INTEGRATION
TYPES OF INTEGRATION
Enterprise Integration, from an enterprise perspective, can be clustered into the following types:
Application to Application
Business to Business
Business to Consumer
Integration, from the perspective of application tiers, can be clustered into the following types:
Portal Integration
Function Integration
Data Integration
Enterprise Integration Categories (Enterprise Perspective):
Application to Application
Business to Business
Business to Consumer
INTEGRATION ARCHITECTURE DESIGN TYPES
The following are the common types of Architecture variants which are used for Enterprise Integration:
Point to Point
Hub and Spoke
Pipeline
Service Oriented Architecture
INTEGRATION CONCEPTS
In the context of integration it is vital to have a sound understanding of the following concepts, as they will be used consistently throughout the training:
1. Application to Application (A2A): This typically depicts integration of applications and systems with each other.
2. Business to Business (B2B): This involves integration with the processes and applications of business partners, customers and suppliers.
3. Business to Consumer (B2C): This involves the direct integration of end customers into internal corporate processes, for example via internet technologies.
4. Integration Types: Integration projects typically are split into integration portals (User Interface Level), shared data integration (Data Level), and shared function integration (Functional Level).
5. Enterprise Application Integration (EAI): This allows for the unrestricted sharing of data, and business processes amongst any connected applications.
6. Messaging/Publish/Subscribe, Message Brokers and Messaging Infrastructures:
These are all integration mechanisms which typically involve asynchronous communication using messages.
7. Enterprise Service Bus (ESB): This is an integration infrastructure that is commonly used to implement Enterprise Application Integration (EAI). The main role of the ESB is to decouple client applications from services.
8. Middleware: Most technological implementation of EAI systems is predominantly based on the Middleware. Middleware is often referred to as the Communication Infrastructure.
9. Routing Schemes: Information can be routed in different ways within a network:
Unicast (1:1 relationship)
Broadcast (all destinations)
Multicast (1:N)
Anycast (1:N-most accessible)
NOTE
The Legacy technical environment of most organisations today consists of A2A, B2B and B2C
TYPES OF INTEGRATION PROJECTS
1. Information Portal
Information Portals:
Consolidates information from multiple sources into a unified display.
Divides the Screen into several zones, enabling each individual screen to depict data collated from a different system.
Examples: Verify the Status of an Order or Address.
2. Data Replication
Shared Data: This usually consists of shared databases, file replication and data transfers.
Shared database: e.g. customer addresses may be required in an order system, a CRM system and a sales system. This type of data can be stored in a shared database to reduce redundancy and synchronization problems.
File replication: e.g. systems often have their own local data storage. This means that any centrally managed data (in a top-level system) has to be replicated to the relevant target databases, and updated and synchronized regularly.
Data transfers: These are a special form of data replication in which data is transferred in files.
3. Shared Business Functions:
Shared Business Functions
Aggregates functionality from diverse systems.
Example: Multiple systems may need to check whether a social-security number is valid, whether the address matches the specified postal code or whether a particular item is in stock. It makes business sense to expose these functions as a shared business function that is implemented once and available as a service to other systems.
4. Enterprise Application Integration (EAI)
Definition of EAI: "The use of EAI means unrestricted sharing of data and business processes among any connected applications" (Linthicum 2000)
Business Perspective:
Creates competitive advantage: all applications integrated
Technical Perspective:
Enables heterogeneous applications, functions, and data to be integrated.
Enables sharing of data, integrates business processes across all
applications.
Core focus: technical integration of application and system landscape.
Middleware products used as integration tools
Adapters enable information and data to be moved across the technologically heterogeneous structures and boundaries.
Limitations
Lack of Service concept
Service concept and standardisation introduced by : Service Oriented Architectures.
5. Service Oriented Architecture (Definition): "SOA is now moving the concept of integration into a new dimension. Alongside the classic 'horizontal' integration, involving the integration of applications and systems in the context of EAI, which is also of importance in an SOA, SOA also focuses more closely on 'vertical' integration of the representation of business processes at an IT level" (Frosche, Reinheimer 2007)
LEVELS OF INTEGRATION
1. Integration on Data level
Data Exchanged between two different systems
File Transfer Protocols mostly used.
Direct connection of databases is most widespread, e.g. Oracle databases exchange data via database links or external tables
2. Integration on Object level
Enables systems to correspond by calling objects from outside the applications involved.
3. Integration on Process level
Communication linking the different applications takes place via workflows, which make up a business process.
Messaging Key Features
Introduced 1970s
Mechanism for Synchronizing processes
Message queues: enable persistent messages, for asynchronous communication and the guaranteed delivery of messages.
Decouples the producer and consumer, with the only common denominator being the queue.
***Insert Diagram***
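As an illustrative sketch (the queue and message names are invented, and an in-process `queue.Queue` stands in for a real messaging product such as a JMS provider), the decoupling above can be shown like this: producer and consumer never reference each other, only the queue.

```python
import queue
import threading

# The queue is the only thing producer and consumer share.
message_queue = queue.Queue()

def producer():
    for i in range(3):
        message_queue.put(f"order-{i}")   # fire-and-forget: producer does not wait
    message_queue.put(None)               # sentinel: no more messages

received = []

def consumer():
    while True:
        msg = message_queue.get()         # blocks until a message arrives
        if msg is None:
            break
        received.append(msg)

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()
print(received)  # ['order-0', 'order-1', 'order-2']
```

A real message queue adds what this sketch lacks: persistence and guaranteed delivery, so the consumer need not even be running when the producer sends.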
Properties and Quality attributes:
1. Availability
In the event of server failure, the client can forward messages to another server.
Physical queues with the matching logical name can be replicated across several instances.
2. Failure Handling
If a server fails, the client can resend the message to another replicated server.
3. Modifiability
Clients and Servers loosely coupled.
Clients and servers can be adapted without major impact on the full system.
Canonical message format can reduce or remove dependencies between producer and consumer.
4. Performance
Handles several thousand messages per second
5. Scalability
Replication and clustering make messaging a highly scalable solution
Publish/Subscribe
Evolution of Messaging
Secure delivery via persistent queuing
Allows for Many to Many messaging
Publisher sends message in message queue, and message queue itself performs the distribution
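The many-to-many distribution performed by the topic (rather than by the publisher) can be sketched as follows. This is a minimal in-process hub; the class, topic name and subscriber lists are invented for illustration:

```python
from collections import defaultdict

class MessageHub:
    """Minimal publish/subscribe hub: the hub, not the publisher,
    distributes each message to every registered subscriber."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # many-to-many: one publish fans out to all subscribers of the topic
        for callback in self._subscribers[topic]:
            callback(message)

hub = MessageHub()
billing, shipping = [], []
hub.subscribe("order.created", billing.append)
hub.subscribe("order.created", shipping.append)
hub.publish("order.created", {"id": 42})
print(billing, shipping)  # both subscribers receive the same message
```

The publisher has no knowledge of how many subscribers exist, which is exactly the loose coupling described above.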
***Insert Diagram***
Properties/Quality Attributes:
1. Availability
Clients capable of sending messages to another server in the case of server failure
2. Failure Handling
Clients can send messages to another replicated server
3. Modifiability
Clients and Servers loosely coupled.
Clients and Servers can be modified without major impact on whole system.
Canonical Message format can reduce or remove dependency between producer and consumer.
4. Performance
Process thousands of messages per second
Non reliable messaging faster than reliable messaging
5. Scalability
Topics can be replicated across server clusters
Message Brokers
1. Key Features
Central Component
Accountable for secure delivery of messages
Possess logical ports: Receiving and Sending Messages
Transports messages between sender and receiver
Implements a hub and spoke architecture, message routing and the transformation of messages
***Insert Diagram***
2. Hub and Spoke Architecture:
Broker Acts as central message hub
Connections to broker made via adapter ports supporting relevant message format.
3. Message Routing:
Processing logic used to route messages
Routing decisions: hardcoded or specified declarative way
Messages routed based on content: content based routing
Messages routed based on specific values or attributes: attribute based routing.
4. Message Transformation
Message logic transforms message input format into message output format
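Content-based routing and message transformation together can be sketched as below. The broker function, queue names, and message fields are all illustrative; a real broker would do this with configurable, declarative rules rather than hard-coded logic:

```python
queues = {"priority-orders": [], "standard-orders": []}

def transform(raw):
    # message transformation: map the input format to the output format
    return {"customer_id": raw["cust"], "total": raw["amt"]}

def route(msg):
    # content-based routing: the broker inspects message content
    # to pick the target; the sender knows nothing about targets
    return "priority-orders" if msg["total"] >= 1000 else "standard-orders"

def broker(raw):
    msg = transform(raw)
    queues[route(msg)].append(msg)

broker({"cust": "C1", "amt": 1500})   # routed to priority-orders
broker({"cust": "C2", "amt": 50})     # routed to standard-orders
```

Because transformation and routing live in the broker, neither sender nor receiver changes when the rules change, which is the modifiability benefit listed below.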
Properties and Qualities:
1. Availability
High Availability is mostly dependent on brokers being virtual and operated as clusters.
2. Failure Handling
Possess different types of input ports for validation (Message format) of incoming messages.
If Broker failure occurs, clients can send message to another replicated broker.
3. Modifiability
Brokers split transformation logic and routing logic
Modifiability highly improved as logic has no influence on sender and receiver
4. Performance
Due to Hub and spoke approach, brokers can potentially be a bottleneck
Especially in scenarios of high volume of messages, large messages and complex transformations.
5. Scalability
Broker clusters allow for high levels of scalability
MESSAGING INFRASTRUCTURE
Instrumental for sending, converting and routing data between different applications with different operating systems.
Logical components:
1. Producer:
The application which sends messages to the messaging queue
2. Consumer:
The application which is interested in specific messages
3. Local Queue:
Local interface of the messaging infrastructure
Each message sent to a local queue is received by the infrastructure and routed to one or more receivers.
4. Intermediate Queues:
Utilised when a message cannot be delivered immediately or must be copied for several receivers
5. Message Management:
Includes sending, routing and converting data, combined with unique features such as guaranteed delivery, message monitoring, tracing individual messages, and error management.
6. Event Management:
The Subscription mechanism is controlled through special events.
ENTERPRISE SERVICE BUS (ESB)
Infrastructure that can be used to implement Enterprise Application Integration.
Main remit of ESB is to decouple client applications from services, see diagram
Encapsulation of services by the ESB means that client application does not need to know anything about location of services or the communication protocols used to call them.
Major SOA vendors now offer specific ESB products, see core functions in below diagram
***Insert Diagram here****
ENTERPRISE SERVICE BUS STRUCTURE
Structure of ESB: **Insert Diagram Here**
Different SOA vendors may use different names for their SOA products, but the products provide the following common functions as the de facto standard:
Routing and Messaging, as base services
Communication bus: enables integration of diverse systems using pre-defined adapters
Transformation and mapping services for a wide range of different format conversions and transformations
Mechanisms for executing processes and rules
Monitoring functions
Development tools for modelling processes, mapping rules and message transfers
Standardised interfaces: i.e. JMS (Java Message Service), JCA (Java Connector Architecture), and SOAP/HTTP
MIDDLEWARE
Key Features
Communication Infrastructure
Enables Communication between Software Components
Core to all Enterprise Application Integration systems.
MIDDLEWARE COMMUNICATION METHODS
Five Categories:
1. Conversational (Dialog-Oriented):
Used in real time systems
Components interact synchronously
2. Request/Reply
Used when application needs to call functions from another application
3. Message Passing
Enables exchanging of messages
4. Publish/Subscribe
Two Roles Involved in non directed communications:
Publisher sends the message to the middleware
Subscriber subscribes to all the relevant types of messages
Middleware ensures all subscribers receive corresponding messages from a publisher.
See table below
MIDDLEWARE BASE TECHNOLOGIES
Base Technologies Types
1. Data Oriented Middleware: This category of technology principally focuses on the integration or distribution of data to different RDBMSs using synchronization mechanisms.
2. Remote Procedure Call: Implementation of the classic client/server approach.
3. Transaction Oriented Middleware: The Transaction concept (ACID-Atomicity, Consistency, Isolation, and Durability) is put into effect using this type of middleware. A Transaction is a finite series of atomic operations which have either read or write access to a database.
4. Message Oriented Middleware: Information is exchanged by means of messages, which are transported by the middleware from one application to the next. Message Queues are most often utilised.
5. Component Oriented Middleware: This represents diverse applications and their components as a complete system.
ROUTING SCHEMES
Information can be routed in the following ways :
1. Unicast (1:1 relationship)
Sends data packages to a single destination
1:1 Relationship between network address and network end point: See Diagram:
2. Broadcast (All destinations)
Sends data packets in parallel to all destinations
Data Packets can also be sent serially (Performance impacted)
1: N relationship between the Network address and the network end point.
See Diagram:
3. Multicast
Sends Packets to a specific selection of destinations
Destination set is a subset of all possible destinations
1: N relationship between the Network address and the network end point.
See Diagram:
4. Anycast
Distributes information to the destination computer that is nearest or most accessible.
1: N relationship between the Network address and the network end point.
Only one end point is addressed at any given time See Diagram:
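The four routing schemes can be contrasted in a small simulation. The endpoint names and hop counts are invented; the point is only which set of destinations each scheme selects:

```python
endpoints = {"A": 4, "B": 1, "C": 3}   # endpoint -> distance in hops (illustrative)

def unicast(dest):
    return [dest]                       # 1:1 - a single named destination

def broadcast():
    return sorted(endpoints)            # all destinations, in parallel

def multicast(group):
    # a specific subset of all possible destinations
    return [e for e in sorted(endpoints) if e in group]

def anycast():
    # 1:N group, but only the nearest/most accessible endpoint is addressed
    return [min(endpoints, key=endpoints.get)]

print(unicast("A"), broadcast(), multicast({"A", "C"}), anycast())
```

Note how anycast still has a 1:N relationship between address and endpoints, yet only one endpoint receives the data at any given time.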
INTEGRATION ARCHITECTURE VARIANTS
1. Point to Point Architecture
Collection of independent systems which are connected via network
No central database
Each system has its unique data storage
New systems are connected to existing ones
Complex set of interfaces: n*(n-1)/2 interfaces See Diagram:
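The interface-count formula above makes the scaling problem concrete. A short calculation (the system counts are arbitrary examples) compares point-to-point wiring with a central hub, where each system needs only one connection:

```python
def point_to_point_interfaces(n):
    # every pair of systems needs its own interface: n*(n-1)/2
    return n * (n - 1) // 2

def hub_and_spoke_interfaces(n):
    # each system connects only to the central hub
    return n

for n in (5, 10, 20):
    print(n, point_to_point_interfaces(n), hub_and_spoke_interfaces(n))
# 20 systems already need 190 point-to-point interfaces, but only 20 hub connections
```

This quadratic growth is why the "spaghetti" of point-to-point connections motivates the hub and spoke architecture described next.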
2. Hub and Spoke Architecture
Minimises growing complexity of interfaces
Uses central integration platform to exchange messages between systems
Central integration platform transforms, routes and converts messages
Used for complex data distribution
See Diagram:
3. Pipeline Architecture
Independent Systems coupled with a Message Bus
Applications have local access to a Bus Interface/Capability
Used for high performance requirements eg event driven architecture
1:N data distribution eg broadcasting
N:1 database eg data warehouse
Independent systems integrated by way of a message bus
Bus system is responsible for message distribution
Comparable to Hub and Spoke Architecture
Middleware products installed and operated on Central Servers
All Communications standardised
See Diagram:
4. Service Oriented Architecture
Enterprise Service Bus is used as the central integration component for service calls.
Enterprise Service Bus transforms, routes and converts messages
Requires SOA strategy and governance
High start-up and infrastructure costs See Diagram:
PATTERNS: FOR ENTERPRISE APPLICATION INTEGRATION
The following are the basic patterns which are used for implementation of EAI and EII platforms:
Direct Connection Pattern
1. Direct Connection Main Features
Depicts the basic type of direct interaction between two applications.
Based on a 1:1 topology (Point to Point connection)
Connection rules encompass: data mapping rules, security rules, and availability rules.
See diagram
2. Direct Connection Logical components:
Source Applications
Connection
Connection Rules
Target Application
3. Direct Connection Strengths
Loose Coupling
Receivers don’t need to be online
4. Direct Connection Weaknesses
Intelligent Routing not supported
Decomposition/Re-Composition not supported
Several Point to Point Connections results in spaghetti configurations
5. Direct Connection Practical Uses
Real time one way message flows are largely supported
Real time request/reply message flows are significantly supported.
Significantly reduced latency of business events
Broker Pattern
1. Broker Main Features
Based on the direct connection pattern, extends it to a 1 : N topology
Permits individual requests from source applications to be routed to several target applications.
Minimises the 1:1 Connections Required.
Connection Rules characterised as Broker Rules (consequently distribution rules are separate from application logic)
Responsible for composition and decomposition of interactions
Uses direct connection pattern for connection between applications.
Forms base for the Publish/Subscribe message flow. See Diagram
2. Broker Logical Components
Source Applications
Broker component (supports message routing, enhancement, transformation, decomposition and re-composition of messages)
Target Applications
3. Broker Strengths
Permits several different applications to interact simultaneously.
Impact on existing applications reduced
Provides routing services; consequently the source application does not need to know the target applications.
Router modification is minimal when target application location is changed.
Decomposition/Re-composition services are available, which subsequently permit an individual request to be sent from one source to several target applications.
4. Broker Weaknesses
For the purposes of Routing, Decomposition and Re-Composition; Logic has to be incorporated into the Broker.
5. Broker Practical Uses
Source Applications has the potential and capability of interacting and sending messages to one or more Target Applications.
Complexity is reduced as a Hub and Spoke Architecture is implemented
Flexibility and Maintainability
No dependency on the interfaces of the target applications.
Router Pattern
1. Router Features
Message routed to only one target application
Makes Decision on Interaction and target application that will receive the message.
Only allows 1:1 connections
See Diagram
2. Router Logical Components
Source Applications
Router ( Provides all the business rules)
Target Applications
New/Existing Target Applications
3. Router Strengths
The impact on existing applications is minimised
Permits several applications interaction
Modifications minimised on Router when target application is moved
4. Router Weaknesses
Unable to send multiple requests simultaneously to multiple target applications.
No Decomposition and Re-Composition of messages.
5. Router Practical Uses
Single Application able to interact with one of the several target applications.
Complexity reduced in a Hub and Spoke Architecture in comparison to a Peer to Peer Architecture.
Source Application is decoupled from target applications and interfaces.
PATTERNS FOR DATA INTEGRATION
Data Integration is implemented based on the following patterns:
Federation Pattern
1. Federation Main Features
Supports structured and unstructured data
Supports read only and read/write accesses to the underlying data sources.
Permits access to diverse data sources and creates the impression that these sources are a single, logical data source. This is achieved by the following:
o A single consistent interface is exposed to the application
o The interface is translated to the interface required by the underlying data
o Data is consolidated into a single result set when returned to the user.
See Diagram
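A federation can be sketched with two hard-coded "sources" behind one consistent interface. The source dictionaries, field names and the `customer_view` function are invented for illustration; in practice adapters and a metadata repository would do the translation:

```python
# Hypothetical data sources with different native interfaces
crm = {"C1": {"name": "Ada"}}
orders = [{"cust": "C1", "total": 100}, {"cust": "C1", "total": 50}]

def customer_view(customer_id):
    """Single consistent interface over both sources: the translation to
    each source's own interface, and the consolidation into one result
    set, both happen inside the federation layer."""
    profile = crm.get(customer_id, {})
    total = sum(o["total"] for o in orders if o["cust"] == customer_id)
    return {"id": customer_id, "name": profile.get("name"), "order_total": total}

print(customer_view("C1"))  # one logical record assembled from two sources
```

The caller never learns that two sources exist, which is what makes near real-time access possible without a consolidated copy of the data.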
2. Federation Logical Components
Calling Application
Metadata Repository
Adapters
Federation building block
Source Applications
3. Federation Practical Uses
Distribution of Data across multiple databases for technical or commercial reasons.
Highly effective as a data integration pattern when the following requirements arise:
o Rapidly changing data demands near real-time access
o A consolidated copy of the data is not feasible due to technical or regulatory constraints
o Read/write access is possible
Population Pattern
1. Population Features
Collates data from several sources and applies it to a database
Based on the read dataset, process data, write dataset model (corresponds to the Extract, Transform and Load process)
See Diagram
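The read/process/write model maps directly onto an ETL sketch. The source rows, the normalisation rule and the target store are all invented examples of the "simple to complex" rules the pattern allows:

```python
source_rows = [("C1", " ada lovelace "), ("C2", "bob")]   # illustrative source data

def extract():
    # read dataset from the source application
    return source_rows

def transform(rows):
    # process data: here a single simple rule (normalise the name field);
    # real population rules can be arbitrarily complex and metadata-driven
    return [{"id": key, "name": name.strip().title()} for key, name in rows]

def load(target, rows):
    # write dataset: the derived data lands in the target application's store
    for row in rows:
        target[row["id"]] = row

target_db = {}
load(target_db, transform(extract()))
print(target_db)
```

Consistent with the uses listed below, the target holds a derived, read-only copy of the source data.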
2. Population Logical Components
Target Applications.
Population component: reads data from the source application and writes data to a data source in the target application. Note: rules for extracting and loading data can range from being simple to being a complex process. Metadata is used to describe these rules.
Source Applications- contain information needed by the Target Application.
3. Population Uses
Only read access to the derived data in the target application is possible.
User must be given only relevant specific data
A specialised copy of existing data (derived data) is needed:
o Subsets of existing data sources
o Modified versions of an existing data source
o Combinations of existing data sources
Synchronization Pattern
1. Synchronization Features
Also known as Replication pattern
Enables bidirectional update flows of data in a multi copy database environment.
The Two way synchronization element of this pattern makes it very different from the “one way” capabilities provided by the Population pattern.
See Diagram
2. Synchronization Logical Components
Target applications: depend on information from the source application which they do not possess.
Synchronization component (data flows both directions)
Source Application: have information which Target applications are dependent upon.
3. Synchronization Uses
A specialised copy of existing data (derived data) is needed:
o Subsets of existing data sources
o Modified versions of an existing data source
o Combinations of existing data sources
PATTERN FOR SERVICE ORIENTED INTEGRATION
Service oriented integration is based on two prominent fundamental patterns:
Process Integration
1. Process Integration Features
Extension of the Broker Pattern (1:N topology)
Target applications significantly simplify the serial execution of business services.
Based on the interaction of the source application, Permits large scale orchestration of serial business processes
Serial sequence defined using process rules, consequently enabling decoupling from the process logic (flow logic and domain logic) of the individual applications.
Rules define the control and data flow and also the permitted call rules for each application.
All Process data is stored in Individual results databases. See Diagram
2. Process Integration Logical Components
Source Applications
Serial process rules (includes routing queries, protocol conversion, message broadcasting, message decomposition and re-composition)
Process logic database (consists of serial process flow, control and data flow rules)
Target Applications ( responsible for implementing the business services)
3. Process Integration Strengths:
Flexibility and responsiveness is largely improved due to implementation of end-to-end process flows.
Externalisation of process logic from individual applications enhances the responsiveness of an organisation.
4. Process Integration Weaknesses
User interaction is not possible.
Only direct and automatic processing is supported
Parallel processing not supported.
5. Process Integration Uses
Support for end to end process flows which use the services provided by the Target applications.
Flexibility and responsiveness of IT is significantly increased by externalizing process logic from individual applications.
6. Process Pattern Variants
The Parallel Process pattern extends the simple serial process orchestration provided by the serial process patterns, by supporting concurrent execution and orchestration of business service calls.
The external business rules variant adds the option of externalizing business rules from the serial process into a business rule engine, where they can be evaluated.
Workflow Integration
1. Workflow Integration Features
Depicts the extension of the process integration pattern, see diagram
Extend the capability of simple serial process orchestration to include support for user interaction during execution of individual process steps.
Supports classic workflow
EVENT DRIVEN ARCHITECTURE (EDA)
1. EDA Integration Perspective
Can be used independently or combined with Service Oriented Architecture
Largely uses publish/subscribe mechanism
Principle of loose coupling is of key importance towards the formation of business processes.
2. Event Processing
Concept embedded within EDA, see diagram
Event processing technologies, for example used in algorithmic trading in stock markets.
Three types: Simple Event Processing, Event Stream Processing, and Complex Event Processing.
Simple Event Processing (SEP): Events which occur individually or in streams, and are likely to trigger processes in the systems which receive the message. The Java Message Service can be cited as a typical example of SEP.
Event Stream Processing (ESP): involves processing streams of incoming messages or events. Core focus is on event stream and most ESP systems have sensor capabilities that detect large events and use filters and other processing methods to influence the stream of messages or events.
Complex Event Processing (CEP): Part of ESP, core focus is recognising trends in large number of events and their message contents that could be distributed across data streams. Eg; tracing credit card fraud.
See Diagram CEP
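The credit card fraud example above can be sketched as a sliding-window correlation: many simple events (transactions) are combined to recognise one complex event (possible fraud). The window size, threshold and card identifiers are invented values:

```python
from collections import deque, defaultdict

WINDOW = 60       # seconds of history to correlate (illustrative)
MAX_TXNS = 3      # more transactions than this inside the window -> alert

recent = defaultdict(deque)   # card id -> timestamps of recent transactions

def on_event(card, ts):
    """CEP sketch: each incoming event is correlated with earlier events
    in its stream; a trend across many events triggers the alert."""
    q = recent[card]
    q.append(ts)
    while q and ts - q[0] > WINDOW:
        q.popleft()               # drop events that left the time window
    return len(q) > MAX_TXNS      # True -> raise a fraud alert

alerts = [on_event("card-1", t) for t in (0, 10, 20, 30)]
print(alerts)  # [False, False, False, True]
```

Real CEP engines express such rules declaratively and evaluate them across distributed data streams, but the window-and-threshold core is the same.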
INTEGRATION TECHNOLOGIES
Grid Computing/Extreme Transaction Processing
1. Grid Computing Features
Definition: "A grid is an infrastructure enabling the integrated, collaborative use of resources which are owned and managed by different organisations" (Foster, Kesselman 1999)
Every computer in the grid is equal to all others
Computing power can be significantly enhanced in grids by adding more computers or by combining grids into meta grids.
Highly scalable
Broken down into:
o Data Grids
o In-Memory Data Grids
o Domain Entity Grids
o Domain Object Grids
2. XTP Features
Distributed Storage Architecture
Permits parallel application access
Designed for distributed access to large volumes of data
3. Core Functions
Distributed Caching and Processing:
o Data distributed across all the nodes on the network
o Automatic failover and load balancing
o Transactional security
Event Driven Processing:
o Operations and transactions can take place in parallel across all physical nodes
o System can efficiently react to data changes due to single event processing
Data Grids
1. Data Grid Features
System made of several distributed servers
Distributed Servers consolidate efforts and work together to retrieve shared information and process shared data and distributed operations on the data
2. In Memory Data Grid
Variant of Data Grids
Shared information is saved and stored locally in memory in a distributed cache.
The distributed cache spreads the data across the servers.
Information is distributed evenly across all the available servers.
An in-memory data grid enables applications to achieve shorter response times by storing user data in memory in appropriate application formats.
Data in the grid is replicated; buffering can be used to accommodate database failures.
3. Domain Entity Grids
Distribute the domain data of the system across several servers
Domain data is extracted from several different data sources
Performs role of aggregator/assembler enabling high performance access to aggregated entities.
4. Domain object grids
Distributes runtime components of the system (the applications) and their status (process data) across servers
DISTRIBUTION TOPOLOGIES
Distribution Topologies/Strategies categories
1. Replicated Caches
Data and Objects are equally distributed across all nodes in the cluster
Node is responsible for determining how large the available data volume can be.
2. Replicated Caches Advantages
Maximum access performance is the same across all the nodes.
All nodes access local memory, referred to as zero latency access.
3. Replicated Caches Disadvantages
Distribution of data across all nodes significantly impacts timescales.
Availability of memory of the smallest server determines the capacity limit.
In the event of transactionality, should a node be locked, every node must agree.
In the event of a cluster error, all the stored information (data and locks) can be potentially lost.
4. Partitioned Caches
Provides solution in relation to the memory and communications for the disadvantages of the replicated caches:
If this distribution strategy is implemented, the following attributes must be taken into account:
Partitioned:
o Data distribution across the cluster is transparent and there are no overlaps in responsibilities for data ownership.
o A unique node is given the role and responsibility for a unique element of the data, and hence manages the master dataset.
o Size of available memory and computing power grows linearly as the cluster grows.
o All operations carried out on stored objects require only a single network hop.
o Only one server is involved in managing the master data, and this server also stores the backup data in the case of a failover.
Load Balanced:
o Distribution algorithms ensure that the information in the cache is distributed in the best possible way across the resources in the cluster, and consequently provide transparent load balancing.
Location Transparency:
o The grid infrastructure is responsible for adapting the data distribution as effectively as possible.
o Heuristics, configurations and exchangeable strategies are used for this purpose.
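A minimal sketch of the partitioned strategy (node names, key format and the crc32-based placement are all invented; real grids use richer, rebalancing-aware distribution algorithms): exactly one node owns each key, so any operation reaches the master copy in a single hop.

```python
import zlib

NODES = ["node-0", "node-1", "node-2"]
caches = {n: {} for n in NODES}        # each node's local partition

def owner(key):
    # exactly one node is responsible for each key (no ownership overlap);
    # crc32 gives a stable key -> node mapping across runs
    return NODES[zlib.crc32(key.encode()) % len(NODES)]

def put(key, value):
    caches[owner(key)][key] = value    # write goes to the master node only

def get(key):
    return caches[owner(key)].get(key) # single hop to the owning node

put("customer:1", {"name": "Ada"})
print(owner("customer:1"), get("customer:1"))
```

Adding a node to `NODES` adds its memory to the total capacity, in contrast to the replicated strategy where the smallest server caps the whole cluster.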
Agents
1. Agent Features
o Autonomous programs, triggered by an application and executed on the information archived in the grid, managed by the grid infrastructure.
Execution Patterns
1. Targeted Execution:
o Agents can be executed on one specific set of information in the data grid.
o A unique key is used to identify the information set.
o For the execution, the grid infrastructure is responsible for finding the best location in the cluster.
2. Parallel Execution
o As with targeted execution, agents are executed on specific sets of information in the data grid, but on several sets in parallel.
o Unique keys are used to identify the information sets.
o For the execution, the grid infrastructure is responsible for finding the best locations in the cluster.
3. Query Based Execution
o Extension of the parallel execution pattern.
o Unique keys are not utilised to identify the information sets.
o Information sets are identified by filter functions in the form of a query object.
4. Data grid wide execution
o Agents are executed in parallel on all the available information sets in the grid.
o Specialised form of the query based execution pattern in which a NULL query object is passed.
5. Data Grid Execution
o To pursue real-time computations, in addition to the scalar agents, cluster-wide aggregation can also be run on the target data.
o Predefined functionality such as average, max, min etc. is provided by most products.
6. Node Based Execution
o Agents can be executed on specific nodes in the grid.
o An individual node can be specified.
o Agents could also be run on a defined subset of the available nodes, or on all the nodes in the grid.
Grid Technology Practical Uses
1. Distributed, transactional data cache (domain entities)
o Application data can be stored in a distributed cache and with transactional access.
2. Distributed, transactional object cache (domain objects)
o Application objects can be stored in a distributed cache and with transactional security.
3. Distributed, transactional process cache (process status)
o Process objects can be stored in a distributed cache and with transactional security
4. SOA Grid
o Business Process Execution Language (BPEL) processes are distributed in serialised form (hydration) throughout the cluster, and can be processed further on another server following de-serialisation. Consequently resulting in highly scalable BPEL processes.
5. Data access virtualisation
o Grids allow virtualised access to distributed information in a cluster.
6. Storage access virtualization
o Information sourced from heterogeneous sourcing systems can be stored in the distribution cache in the appropriate format.
7. Data format virtualization
o Information sourced from heterogeneous sourcing systems can be stored in the distribution cache in the appropriate format.
8. Data access buffers
o Applications (decoupled) are not impacted by failover events on different target systems.
o Access to data storage systems is encapsulated and buffered, enabling it to be transparent for the application.
9. Maintenance window virtualization
o Grids support dynamic cluster sizing
o Servers can be added or removed from the cluster during runtime
10. Distributed master data management
o In real-time environments, bottlenecks can occur in central master data applications. This can be offset by distributing the master data across a data grid.
11. Notification service in an ESB
o Message based system is replaced in a service bus by grid technology
12. Complex real time intelligence
o Amalgamates functionality of Complex Event Processing (CEP) and data grids.
o Enables Highly Scalable analysis applications which provide complex pattern recognition functions in real times scenarios.
o Event Driven Architecture with CEP engine, message transport, pre analysis and pre-filtering is based on grid technology.
o Infrastructure components of the grid are also responsible for load balancing, fail safety, and availability of historic data from data marts and the in-memory cache.