Bologna, 19th-20th February 2004 5th Plenary TAPAS Workshop
JBoss Clustering and Configuration
Service Implementation
Giorgia Lodi (lodig@cs.unibo.it)
Department of Computer Science
University of Bologna
Summary
• Configuration Service
• JBoss Clustering
  - load balancing and fail-over mechanisms
• Clustering Experiments
• Current work and future work
• Concluding Remarks
• References
Configuration Service (1/2)
The configuration service exercises "coarse-grained" configuration control
• it can manage macro resources such as host computers
• it cannot view or manage the activities of those resources at a finer granularity than that
The JVM does not allow a high-level programmer to manage parameters such as CPU utilization, memory usage, and disk space usage
Configuration Service (2/2)
• It will not reserve and allocate a certain amount of CPU or memory or disk for a particular application
• Nor will it change the machine's scheduler
• It is responsible for setting up the platform and distributing the load among the hosts
JBoss Clustering Service (1/7)
The clustering service
• is useful for meeting non-functional requirements such as availability and scalability
• provides load-balancing and fail-over services
JBoss Clustering Service (2/7)
A JBoss cluster is a set of nodes
• each node is an instance of the JBoss AS
• several nodes in a cluster can be grouped to form a "partition"
  - a partition is identified by a unique name in the cluster
  - the partition name is defined in the AS configuration files
• a node may belong to one or more partitions (i.e., partitions may overlap)
JBoss Clustering Service (3/7)
[Figure: JBoss clustering architecture stack: JGroups at the bottom, the HAPartition framework on top of it, and HAJNDI, HARMI, HAEJB, DistributedState, and DistributedReplicantManager built on the HAPartition framework]
JBoss Clustering Service (4/7)
JGroups: an open-source project
• a reliable group communication toolkit written in Java
Highly Available Partition (HAPartition)
• abstracts the communication layer
• provides access to basic communication primitives
• provides informational data (e.g. the cluster name, the name of the node, information about the membership of the cluster)
• two categories of primitives are provided: state transfer and RPC calls
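The two primitive categories can be pictured with a minimal sketch of the HAPartition contract; the interface and class names below are illustrative inventions, not the actual JBoss 3.x API, and the implementation is a trivial single-node stand-in just to make the contract concrete.

```java
import java.io.Serializable;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the HAPartition contract; names are
// illustrative, not the real JBoss 3.x API.
interface ClusterPartition {
    String getPartitionName();                 // informational data
    List<String> getCurrentView();             // cluster membership
    Serializable getState(String serviceName); // category 1: state transfer
    List<Object> callMethodOnCluster(String serviceName,
                                     String methodName); // category 2: RPC calls
}

// Trivial single-node implementation, only to exercise the contract.
class LocalPartition implements ClusterPartition {
    private final String name;
    private final Map<String, Serializable> state = new HashMap<>();

    LocalPartition(String name) { this.name = name; }

    public String getPartitionName() { return name; }

    public List<String> getCurrentView() {
        return Collections.singletonList("node0");
    }

    public Serializable getState(String serviceName) {
        return state.get(serviceName);
    }

    public void putState(String serviceName, Serializable s) {
        state.put(serviceName, s);
    }

    public List<Object> callMethodOnCluster(String serviceName, String methodName) {
        // One result per cluster member; a single member here.
        return Collections.singletonList("node0:" + serviceName + "." + methodName);
    }
}
```

In the real service, a joining node would use the state-transfer primitive to pull the current shared state, while the RPC primitive fans an invocation out to every member of the partition.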
JBoss Clustering Service (5/7)
Distributed Replicant Manager (DRM)
• responsible for managing replicated objects across a given partition
• example: assume it manages a list of stubs for an RMI server; the DRM allows these stubs to be shared in the cluster and makes it known which node each stub belongs to
Distributed State Service (DS)
• manages replicated state (e.g. Stateful Session Bean state, HTTP sessions)
• allows a set of dictionaries to be shared in the cluster
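The "set of dictionaries" managed by the DS can be pictured as a map of named, per-category maps. This is a local sketch only, with an invented class name; the real service additionally replicates every update to the other nodes of the partition.

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Sketch of the Distributed State data model: a set of named
// dictionaries (e.g. one per replicated service). The real DS
// would also push each set() to every other node; this is local.
class DistributedStateSketch {
    private final Map<String, Map<Serializable, Serializable>> categories =
            new HashMap<>();

    void set(String category, Serializable key, Serializable value) {
        categories.computeIfAbsent(category, c -> new HashMap<>())
                  .put(key, value);
    }

    Serializable get(String category, Serializable key) {
        Map<Serializable, Serializable> dict = categories.get(category);
        return dict == null ? null : dict.get(key);
    }
}
```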
JBoss Clustering Service (6/7)
HA JNDI
• a global, shared, cluster-wide JNDI Context
• used by clients to look up and bind objects
HA RMI
• responsible for implementing the smart proxies of JBoss clustering
HA EJB
• provides mechanisms to cluster the EJBs (i.e. Stateless Session Beans, Stateful Session Beans, Entity Beans)
• Message-Driven Beans: no clustered version is currently implemented in JBoss 3.x
JBoss Clustering Service (7/7)
Supports both so-called "homogeneous" and "heterogeneous" deployment in the cluster
• homogeneous: each node contains the same beans
• heterogeneous: each node contains a different set of beans
Homogeneous Deployment
Realized using the JBoss farming service
• the application is copied into the JBoss farm directory
[Figure: a JAR copied into the /farm directory of one node is replicated to Node 1, Node 2, and Node 3 of the cluster]
Heterogeneous Deployment
No documentation is available for this
Realized by defining which node an EJB belongs to
Not recommended
• distributing transactions is a problem
  - requires propagation of the Tx context and synchronization of the transaction monitors across nodes
  - requires distributed notifications
  - a distributed transaction manager is currently missing
• it has a deep performance impact
Conclusion (in every piece of JBoss documentation): USE HOMOGENEOUS DEPLOYMENT!!
Load Balancing Policies (1/3)
JBoss adopts the third model (client-based)
• motivations:
  - no single point of failure
  - the load-balancing activity can only die when the client application dies
  - minimal performance cost (the client pays the full price)
[Figure: three load-balancing models: 1) server-based, 2) intermediary server, 3) client-based; in each, client applications invoke Server 1 and Server 2 through the respective balancing point]
Load Balancing Policies (2/3)
Defined at deployment time in the Deployment Descriptors (DDs)
<jboss>
  <enterprise-beans>
    <session>
      <ejb-name>MySessionBean</ejb-name>
      <clustered>True</clustered>
      <cluster-config>
        <partition-name>DefaultPartition</partition-name>
        <home-load-balance-policy>
          org.jboss.ha.framework.interfaces.RoundRobin
        </home-load-balance-policy>
        <bean-load-balance-policy>
          org.jboss.ha.framework.interfaces.FirstAvailable
        </bean-load-balance-policy>
      </cluster-config>
    </session>
  </enterprise-beans>
</jboss>
Load Balancing Policies (3/3)
Four load-balancing strategies are already included in the JBoss clustering service
• Random robin, Round robin, First available, First available identical all proxies
Using the RMI mechanism (HA RMI)
• clients get references to remote EJB components using the RMI mechanism
• a stub (i.e. proxy) to the objects is downloaded into the client
  - the proxy code includes the clustering logic (i.e. load balancing and fail-over)
  - the proxy contains the list of target nodes the client can access and the load-balancing policy
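How two of these strategies pick a target from the proxy's node list can be sketched as follows; this is illustrative code under an invented interface, not JBoss's own policy classes.

```java
import java.util.List;
import java.util.Random;

// Illustrative sketch of two client-side load-balancing policies;
// the interface and classes are inventions, not the JBoss classes.
interface LoadBalancePolicy {
    int chooseTarget(List<String> targets);
}

// Round robin: cycle through the target list on every invocation.
class RoundRobin implements LoadBalancePolicy {
    private int next = 0;

    public int chooseTarget(List<String> targets) {
        int index = next % targets.size();
        next++;
        return index;
    }
}

// First available: elect one target at random and stick to it
// until the target list shrinks (e.g. the elected node failed).
class FirstAvailable implements LoadBalancePolicy {
    private Integer elected = null;

    public int chooseTarget(List<String> targets) {
        if (elected == null || elected >= targets.size()) {
            elected = new Random().nextInt(targets.size());
        }
        return elected;
    }
}
```

Round robin spreads calls evenly, while first available keeps a client pinned to one node, which matters for stateful beans.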
Fail-over Mechanism
If the cluster topology changes
• the JBoss server piggybacks a new list of target nodes on the response
The proxy, before returning the response to the client code
• unpacks the list of target nodes from the response
• updates its list with the new one
• returns the real invocation result to the client code
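The piggybacking step can be sketched like this; the Response wrapper and class names are hypothetical, since the real wire format is internal to JBoss.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical wrapper for a reply that may carry a fresh target
// list; not the real JBoss wire format.
class Response {
    final Object result;
    final List<String> newTargets; // null if the topology is unchanged

    Response(Object result, List<String> newTargets) {
        this.result = result;
        this.newTargets = newTargets;
    }
}

class ClusteredProxy {
    private List<String> targets;

    ClusteredProxy(List<String> initialTargets) {
        this.targets = new ArrayList<>(initialTargets);
    }

    List<String> getTargets() { return targets; }

    // Unpack the piggybacked node list (if any), update the local
    // view of the cluster, then hand the real result to the caller.
    Object unwrap(Response response) {
        if (response.newTargets != null) {
            this.targets = new ArrayList<>(response.newTargets);
        }
        return response.result;
    }
}
```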
Positioning the clustering logic
Clustering logic (i.e. load balancing and fail-over) located in the last interceptor of the client-side proxy
[Figure: the client JVM holds a run-time generated proxy whose interceptor chain (Invocation Handler, Security, Transaction, Clustered Interceptor) ends with the invokers to the target nodes]
What we are investigating…
Currently, we are investigating
• the use of homogeneous deployment
• the use of the notion of "partition" for configuration/reconfiguration purposes
Clustering Experiments (1/2)
A very simple application was implemented
[Figure: an application client, running in the application client container, invokes the AccountManager and StatementManager session beans in the EJB container; these use the Account and Statement entity beans, which are linked by an entity relationship and backed by a DB]
Clustering Experiments (2/2)
[Figure: the client invokes the Account application, which is deployed on two JBoss AS nodes forming the cluster]
Clustering Experiments: Results
The state is correctly transferred among the nodes of the cluster
Each update is seen by every node of the cluster
The cluster membership is correctly updated and seen by the cluster nodes
Fail-over guarantees that application instances continue to operate on the surviving nodes of the cluster
JBoss Clustering Limitations
Synchronization
• no distributed locking mechanisms for the synchronization of concurrent Entity Beans
  - these beans can only be synchronized by locking at the database level
Missing cluster-wide configuration management
• cluster administration: connect directly to each node's JMX console
Load balancing
• the current implementation embodies non-adaptive strategies only (i.e. none of them considers the dynamic load conditions of the machines in the cluster)
Current work (1/2)
Experimental assessment of the extent to which JBoss can be programmed so as to distribute the computational load dynamically at run time
• extension of the JBoss load-balancing mechanism
  - integration of dynamic/adaptive load-balancing strategies, to be defined at deployment time (for the time being)
• testbed: a cluster of machines running JBoss, which will be subjected to variable load conditions (e.g. using ECperf for simulation purposes)
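One way such an adaptive strategy could plug into the client-side policy model is to pick the least-loaded target from reported load figures. This is purely illustrative of the idea under investigation; no such policy ships with JBoss 3.x, and the load map here would have to be fed from monitoring data.

```java
import java.util.List;
import java.util.Map;

// Illustrative adaptive policy: choose the node with the lowest
// reported load. In a real system the load map would be refreshed
// from cluster monitoring; here it is simply passed in.
class LeastLoaded {
    int chooseTarget(List<String> targets, Map<String, Double> load) {
        int best = 0;
        for (int i = 1; i < targets.size(); i++) {
            double current = load.getOrDefault(targets.get(i), 0.0);
            double bestLoad = load.getOrDefault(targets.get(best), 0.0);
            if (current < bestLoad) {
                best = i;
            }
        }
        return best;
    }
}
```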
Current work (2/2)
Configuration-service-driven, run-time management of faulty/overloaded nodes
• assume the application is homogeneously deployed in a JBoss (partition of a) cluster (i.e., each node runs a full instance of the application)
• on node failure
  - the JBoss fail-over mechanism guarantees that surviving application instances continue to operate normally
  - in contrast, the TAPAS configuration service guarantees that a new node replaces the failed one (and the state of the failed node is restored)
  - motivation: assume the partition consists of two nodes, only, …
Future work (1/2)
Current JBoss
• the cluster is used completely (i.e. all its nodes) when deploying the application (i.e. no dynamic farming service)
  - application components cannot be deployed on a subset of the nodes of the initial cluster
The TAPAS Configuration Service selects the subset of nodes (of the cluster) on which to deploy and run applications
Future work (2/2)
Geographical clustering
• evaluation of VPN technology to support a geographically clustered AS
• experimental evaluation of a geographically clustered AS
SLA Interpreter
Two phases
• a pure parsing process, using either a SAX or a DOM XML parser
  - final result: a Java object with as many attributes as there are elements in the original XML document
• the Java object is processed again to obtain low-level QoS requirements (this may require statistical analysis)
Currently
• the first phase (i.e. the SLA parser) is implemented
  - using the DOM XML parser, as applied throughout the JBoss source code
  - using the old SLA version
• the SLA file is included in the application's META-INF directory together with the DDs
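The first phase can be sketched with the standard DOM API: parse the SLA document and collect one attribute per top-level element. The element names in the test input are invented for illustration; the real SLA schema is TAPAS-internal.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

// Sketch of phase one: parse the SLA with DOM and collect one
// attribute per child element of the document root.
class SlaParser {
    static Map<String, String> parse(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(
                        xml.getBytes(StandardCharsets.UTF_8)));
        Map<String, String> attributes = new LinkedHashMap<>();
        NodeList children = doc.getDocumentElement().getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            Node child = children.item(i);
            if (child.getNodeType() == Node.ELEMENT_NODE) {
                attributes.put(child.getNodeName(),
                               child.getTextContent().trim());
            }
        }
        return attributes;
    }
}
```

The resulting map plays the role of the "Java object with as many attributes as the elements of the original XML document"; phase two would then derive low-level QoS requirements from it.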
Concluding Remarks
The SLA parser must be revised to work with the new SLA
If possible, use distributed transactions from Arjuna
• to "overcome" the JBoss problems with heterogeneous deployment?
References
• JBoss Group, "Feature Matrix: JBoss Clustering (Rabbit Hole)", 19 March 2002.
• S. Labourey and B. Burke, "JBoss Clustering, 2nd Edition", 2002.
• http://www.javagroups.com/
• G. Ferrari and G. Lodi, "Implementing the TAPAS Architecture", TAPAS Internal Draft, December 2003.
• S. Labourey, "Load Balancing and Failover in the JBoss Application Server", IEEE Task Force on Cluster Computing, 2001-2004. Available at http://www.clusteringcomputing.org/
• B. Burke and S. Labourey, "Clustering with JBoss 3.0", ONJava.com, October 2002.