Best Practices For Performance and Scalability With J2EE
Chris Adkin
16th February 2009, last update 5th April 2009

DESCRIPTION

A presentation on best practices for J2EE scalability from requirements gathering through to implementation, including design and architecture along the way.

TRANSCRIPT

Page 1: J2EE Performance And Scalability Bp

1

Best Practices For Performance and Scalability With J2EE

Chris Adkin

16th February 2009, last update 5th April 2009

Page 2: J2EE Performance And Scalability Bp

2

Introduction

There is a wealth of material on the internet and in the blogosphere on J2EE best practices for performance and scalability.

What follows are recommendations that mostly come from my own professional experience.

These will be broken down into the following areas:-
Requirements capture
Architecture
Design patterns
Design
Implementation

Page 3: J2EE Performance And Scalability Bp

3

Introduction

The J2EE application server that I use is IBM WebSphere, hence the slight WebSphere flavour of the presentation.

Where WebSphere specific functionality has been mentioned, e.g. DynaCache, there will undoubtedly be something similar available with your own application server of choice.

Page 4: J2EE Performance And Scalability Bp

4

Introduction

Overlap
There will be overlap in some parts of this presentation.
For example, if you bring most of the data closer to the business logic using a caching solution, this is an architectural decision.
If you only cache data relating to a specific piece of functionality on the application server, this is more of a design decision.
The same goes for StAX, i.e. you can use this to parse XML with or without the involvement of web services.

Page 5: J2EE Performance And Scalability Bp

5

Introduction

SOA and Web Services
Due to the popularity of SOA, this presentation includes web services best practices, but excludes REST on the grounds that:-
REST is a convention rather than a standard and therefore cannot guarantee interoperability.
REST cannot provide reliable messaging in the same way that WS-* over JMS does.
REST is not as secure as WS-*.
REST may become similar to WS-* if it ever facilitates attachments.

Page 6: J2EE Performance And Scalability Bp

6

Feedback

If you see this presentation on www.slideshare.net or my blog, all constructive feedback is welcome.

Page 7: J2EE Performance And Scalability Bp

7

The Performance and Design Trade Off

The fastest way to do anything is by using the shortest code path possible; however, this may undermine code:-
Maintainability
Extensibility
Reusability
Conversely, producing the most elegant design possible may undermine performance and scalability.
Be pragmatic about achieving the most elegant design possible and the best performance and scalability possible simultaneously. Achieving both goals can require trade-offs.

Page 8: J2EE Performance And Scalability Bp

8

“Picking The Low Hanging Fruit”

The final section focuses on performance features that can be used without any impact on the architecture, design or code.

The “low hanging fruit”, to use tuning parlance.

Page 9: J2EE Performance And Scalability Bp

9

Requirements Capture

Page 10: J2EE Performance And Scalability Bp

10

#1 It's Never Too Early To Think About Performance
Capture performance requirements as early in the project life cycle as possible, preferably at use case level where applicable.

Page 11: J2EE Performance And Scalability Bp

11

#2 Allow For Scalability In The Requirements
Allow for software scalability in terms of:-
Transaction volume growth
User population growth
Data growth

You want to avoid an application that performs adequately on day one of production and then deteriorates in performance from that point onwards.

Page 12: J2EE Performance And Scalability Bp

12

#3 Specify Resource Utilisation In The Non-Functional Requirements

Specify response times, throughput and CPU utilisation for when multiple batch processes and on-line usage are taking place at the same time, if applicable.
CPU utilisation at this early stage?!? Yes:-
Anyone can write code that abuses system resources through things such as excessive remote method calls.
Anyone can write code that under-utilises resources through locking on singleton resources etc.
What happens if multiple processes need to run in 'catch-up' scenarios, when a single process saturates the CPU?
Do what you can; this may be easier when a new application is being built to replace a legacy application.

Page 13: J2EE Performance And Scalability Bp

13

#4 Specify The Target Environment For Performance Acceptance Testing Up Front
It's no good if the developers test their code on a sixteen core machine when the production server only has four cores!
Specify the performance acceptance testing environment as tightly as possible in terms of:-
Software versions: operating systems, application server, relational databases etc.
Hardware
Topology
Data set
General configuration

Page 14: J2EE Performance And Scalability Bp

14

Architecture

Page 15: J2EE Performance And Scalability Bp

15

What Is Architecture ?

One of the most overloaded terms in the IT industry right now.

IEEE Standard 1471 defines this as:-“Architecture: the fundamental organization of a system embodied in its components, their relationships to each other and to the environment and the principles guiding its design and evolution”.

Some people define this as the characteristics of a system that are “hard to change”.

Page 16: J2EE Performance And Scalability Bp

16

What Is Design ?

Design focuses on how to deliver the functionality required in order to satisfy use cases within the constraints of the architecture.

Carnegie Mellon University's Software Engineering Institute has produced some interesting essays on this:-
Defining The Terms Architecture, Design and Implementation
What Is The Difference Between Architecture And Design?

Page 17: J2EE Performance And Scalability Bp

17

#1 Avoid Distributed Object Architectures
This is Martin Fowler's first law of distributed object design. Distributed object architectures are an anti-pattern to scalability through the remote call overheads they incur, specifically around:-
Network latency
Network round trips
Object serialisation
Prefer "shared nothing" architectures for scaling, as per "Lessons learned from failed projects" in this article.

Page 18: J2EE Performance And Scalability Bp

18

#1 Avoid Distributed Object Architectures
Most application servers now have the ability to turn remote method calls between beans within the same container into local calls.
If the application is symmetrically deployed across a cluster with workload management and remote method call optimisation, this might not be a complete disaster.
However, do not waste your time:-
Architecting a distributed object architecture
Architecting, designing and coding the service locator and business delegate 'plumbing' for such a solution

Page 19: J2EE Performance And Scalability Bp

19

#1 Avoid Distributed Object Architectures
This practice also applies to service oriented architectures, as per "Build a resilient SOA infrastructure" Parts 1 and 2. From part 1:-

“Established best practices and lessons learned from related technologies provide strong guidance for building a resilient SOA. For example, from Java™ 2 Platform, Enterprise Edition (J2EE) technology, the collocation of dependant Enterprise JavaBeans (EJB) components on the same application server allows for optimizations like pass by reference as opposed to pass by value, and also reduces the consumption of server resources, specifically the use of one application server worker thread in one server compared to N threads across N servers. It follows analogously in the context of SOA that the collocation of tightly-coupled services should yield similar benefits as those observed in J2EE.”

Page 20: J2EE Performance And Scalability Bp

20

#2 Be Aware Of The Performance Penalties That Tiers Incur
Security conscious industries, such as finance, prefer architectures subdivided into tiers, each of which goes into its own DMZ.
Tiers can have the same performance overheads as distributed object architectures.
Performance penalties might be acceptable when there is minimal chatter between tiers, but not tolerable for 'chatty' applications, e.g. batch processes.
'Chattiness' refers to conversational traffic between tiers.

Page 21: J2EE Performance And Scalability Bp

21

#3 Choose Clustering Solutions With "Prefer Local Resources" Optimisation
The purpose of avoiding distributed object architectures and tiered architectures is to avoid performance death via pass by copy overheads.
On the same theme, prefer clustering solutions that will try to direct method calls to beans deployed in the same container as the one from which the calls originated.

Page 22: J2EE Performance And Scalability Bp

22

#4 Iterative Development And Architecture Do Not Always Mix
In recent times, iterative approaches to software development (agile etc.) have become very popular.
The architecture should be "over-arching"; developing the architecture iteratively can lead to problems.
It may be OK if you know from the outset where you are going with the architecture, or if the architecture is extremely simple and will remain so.

Page 23: J2EE Performance And Scalability Bp

23

#5 Avoid Tightly Coupled Components
"Loosely coupled and highly cohesive" features prominently in the vocabulary of most architects and designers.
Vertical and horizontal layers should always be loosely coupled.
I have seen projects where the same vertical layers are always invoked together. Avoid this in order to achieve performance gains through code path shortening and SQL statement consolidation.
This is an obvious point, but one I consider well worth reiterating.

Page 24: J2EE Performance And Scalability Bp

24

#6 Minimise Data Access Latency In The Architecture
Minimise latency when accessing data by either:-
Moving some of the business logic closer to the database, i.e. using PL/SQL or Java stored procedures in the database.
Moving the data closer to the business logic, i.e. using caching and / or a grid style caching solution.
If the same data item is rarely used more than once during processing, move the processing closer to the database rather than using a caching solution.

Page 25: J2EE Performance And Scalability Bp

25

#6 Minimise Data Access Latency In The Architecture
Some caching solutions are code invasive and need to be factored into the software at the architecture development stage.
Other caching products have APIs which minimise the impact of the solution on the code, e.g.:-
GigaSpaces has a JMS API.
WebSphere eXtreme Scale has hooks into the Java Persistence API via annotations.
Harnessing grid processing capabilities will probably require the use of custom APIs. Check this out up front!

Page 26: J2EE Performance And Scalability Bp

26

#6 Minimise Data Access Latency In The Architecture
There are generally two types of caching solution:-
Object caches
Memory resident relational databases, e.g. Oracle TimesTen and IBM solidDB
Choosing which one to use depends upon:-
Whether your application is expecting objects or relational data.
Some solutions, e.g. memory resident relational databases, are easier to retrofit with minimal code changes than others.

Page 27: J2EE Performance And Scalability Bp

27

#7 Include Caching In The Architecture
This point overlaps with the last one somewhat, but concerns more than just access to the core data.
The fastest way to do anything is not to do it at all.
In the context of a J2EE application server, cache results, objects, servlets and Java Server Pages so as to minimise processing.
WebSphere dynamic cache (DynaCache) facilitates the caching of all of these objects, plus command classes and web services output.

Page 28: J2EE Performance And Scalability Bp

28

#7 Include Caching In The Architecture
The layers of layered / tiered architectures 101:-
Presentation
Service
Business Logic
Integration
In an ideal world this basic model should also include a caching layer.
I have seen software where the beans used to access standing data from the database were the most active beans in the application until caching was introduced. Avoid this!

Page 29: J2EE Performance And Scalability Bp

29

Design Patterns

Page 30: J2EE Performance And Scalability Bp

30

What Is A Design Pattern ?

To quote the design patterns section of Sun's blueprints for J2EE web site:-

“A pattern describes a proven solution to a recurring design problem, placing particular emphasis on the context and forces surrounding the problem, and the consequences and impact of the solution”.

Page 31: J2EE Performance And Scalability Bp

31

Why Use Design Patterns ?

Again, from the design patterns section of Sun's blueprints for J2EE:-
"They have been proven. Patterns reflect the experience, knowledge and insights of developers who have successfully used these patterns in their own work.
They are reusable. Patterns provide a ready-made solution that can be adapted to different problems as necessary.
They are expressive. Patterns provide a common vocabulary of solutions that can express large solutions succinctly."

Page 32: J2EE Performance And Scalability Bp

32

#1 Session Façade Pattern

Use stateless session beans with a coarse grained interface as the API to the business logic.

This will allow clients to be serviced with a minimal amount of network round trips and remote method calls.

A core J2EE design pattern as detailed here.
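The sketch below illustrates the idea under EJB 3.0: a single coarse grained placeOrder operation on a stateless session bean replaces a series of fine grained remote calls. The OrderFacade, OrderDto and OrderResultDto names are illustrative rather than from the original material, and the interface and bean would live in separate source files.

    import javax.ejb.Remote;
    import javax.ejb.Stateless;

    @Remote
    public interface OrderFacade {
        // one remote round trip per order, rather than one per attribute
        OrderResultDto placeOrder(OrderDto order);
    }

    @Stateless
    public class OrderFacadeBean implements OrderFacade {
        public OrderResultDto placeOrder(OrderDto order) {
            // validate, price and persist the order in one unit of work,
            // delegating to local business components behind the facade
            return new OrderResultDto(order.getOrderNumber(), "ACCEPTED");
        }
    }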

Page 33: J2EE Performance And Scalability Bp

33

#2 Service Locator Pattern

A core J2EE design pattern.
Allows calls to EJBs and JMS components to be encapsulated in one place.
This pattern lends itself to caching of EJB local and remote home interfaces, topics, queues, etc., thus minimising JNDI lookups.
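A minimal sketch of the caching aspect of the pattern is shown below; the JNDI names passed in are whatever your deployment uses, and a production version would also handle narrowing of remote home interfaces and cache invalidation.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public final class ServiceLocator {

        private static final ServiceLocator INSTANCE = new ServiceLocator();
        private final Map<String, Object> cache = new ConcurrentHashMap<String, Object>();
        private final InitialContext context;

        private ServiceLocator() {
            try {
                context = new InitialContext();
            } catch (NamingException e) {
                throw new IllegalStateException("Unable to create InitialContext", e);
            }
        }

        public static ServiceLocator getInstance() {
            return INSTANCE;
        }

        // look each resource up once, then serve subsequent requests from the cache
        public Object lookup(String jndiName) throws NamingException {
            Object resource = cache.get(jndiName);
            if (resource == null) {
                resource = context.lookup(jndiName);
                cache.put(jndiName, resource);
            }
            return resource;
        }
    }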

Page 34: J2EE Performance And Scalability Bp

34

#3 Data Transfer Object Pattern

This is a core J2EE design pattern as per this description.

Design the service layer API so that data can be passed to clients as serialisable objects.
This helps to avoid multiple get method calls and the performance loss caused by network latency and round trips.
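A minimal sketch of such an object is shown below; the field names are illustrative. The client receives the whole order state in one serialisable object instead of issuing one remote getter call per field.

    import java.io.Serializable;
    import java.math.BigDecimal;

    public class OrderDto implements Serializable {

        private static final long serialVersionUID = 1L;

        private String orderNumber;
        private String customerId;
        private BigDecimal totalValue;

        public OrderDto(String orderNumber, String customerId, BigDecimal totalValue) {
            this.orderNumber = orderNumber;
            this.customerId = customerId;
            this.totalValue = totalValue;
        }

        public String getOrderNumber() { return orderNumber; }
        public String getCustomerId() { return customerId; }
        public BigDecimal getTotalValue() { return totalValue; }
    }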

Page 35: J2EE Performance And Scalability Bp

35

#4 Proxy Service Pattern

Consider the proxy service pattern for 'parallelising' batch oriented workloads.
This essentially involves a 'service' that partitions the workload and then distributes it amongst worker threads or beans.
This is the approach used to parallelise batch workloads with WebSphere Compute Grid.
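The sketch below shows the partition-and-distribute idea using a plain java.util.concurrent executor; it is not the Compute Grid API, and inside a container you would normally hand the partitions to the server's work manager or async beans rather than raw threads. The work item type and partitioning scheme are illustrative.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class BatchProxyService {

        public void process(List<String> workItems, int partitions) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(partitions);
            List<Future<Integer>> results = new ArrayList<Future<Integer>>();

            // carve the workload into roughly equal partitions
            int chunk = (workItems.size() + partitions - 1) / partitions;
            for (int i = 0; i < workItems.size(); i += chunk) {
                final List<String> slice =
                    workItems.subList(i, Math.min(i + chunk, workItems.size()));
                results.add(pool.submit(new Callable<Integer>() {
                    public Integer call() {
                        // process one partition; return the number of items handled
                        return slice.size();
                    }
                }));
            }

            for (Future<Integer> result : results) {
                result.get();   // wait for every partition to complete
            }
            pool.shutdown();
        }
    }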

Page 36: J2EE Performance And Scalability Bp

36

#5 The Business Delegate Pattern

This de-couples the business logic from the presentation layer.

It also aims to minimise the number of network round trips between the presentation and business logic tiers.

Further details of this pattern can be found here.
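A minimal sketch is shown below, reusing the hypothetical OrderFacade and ServiceLocator from the earlier sketches: the web tier codes against a plain Java class, never sees the JNDI or EJB plumbing, and each request costs a single coarse grained call.

    public class OrderDelegate {

        private final OrderFacade facade;

        public OrderDelegate() throws Exception {
            // the caching service locator hides the lookup and avoids repeated JNDI calls
            facade = (OrderFacade) ServiceLocator.getInstance().lookup("ejb/OrderFacade");
        }

        public OrderResultDto placeOrder(OrderDto order) {
            // one network round trip per order
            return facade.placeOrder(order);
        }
    }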

Page 37: J2EE Performance And Scalability Bp

37

Design

Page 38: J2EE Performance And Scalability Bp

38

#1 Do Not Abuse The Database

If using raw JDBC or SQLJ, make the code bind friendly.
Use batching APIs where possible, e.g. the JDBC batching API.
Set the pre-fetch size to minimise network round trips for result set retrieval.
Acquire connections and statement handles late and release them early.
Do not hammer the database with repeated calls for the same standing data; cache it in the application server.
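A minimal sketch of these points is shown below, assuming a container managed DataSource at the hypothetical JNDI name "jdbc/appDataSource" and the OrderDto from the design patterns section: bind variables rather than literals, the JDBC batching API, and a connection that is acquired late and released early.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.util.List;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class OrderWriter {

        public void insertOrders(List<OrderDto> orders) throws Exception {
            DataSource ds = (DataSource) new InitialContext().lookup("jdbc/appDataSource");
            Connection con = ds.getConnection();            // acquire late
            try {
                PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO orders (order_number, customer_id) VALUES (?, ?)");
                try {
                    for (OrderDto order : orders) {
                        ps.setString(1, order.getOrderNumber());   // bind variables,
                        ps.setString(2, order.getCustomerId());    // not literals
                        ps.addBatch();
                    }
                    ps.executeBatch();                      // one round trip per batch
                } finally {
                    ps.close();
                }
            } finally {
                con.close();                                // release early
            }
        }
    }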

Page 39: J2EE Performance And Scalability Bp

39

#2 Respect The "Holy Trinity Of Database Performance"
This is really a follow-on from the last point; however, its importance means that it warrants a mention in its own right.
The "holy trinity" consists of:-
Connection management
Cursor management
'Good' schema design

Page 40: J2EE Performance And Scalability Bp

40

#2 Respect The "Holy Trinity Of Database Performance"
Connection management:-
Always use JDBC connection pooling. No matter what persistence method you use, you will usually end up using JDBC under the covers of your ORM somewhere.
Set the minimum and maximum connections on the pool to be the same. A rapid increase in the connection rate can cause an unpleasant phenomenon known as a "connection storm".
Acquire connection handles late and release them early.

Page 41: J2EE Performance And Scalability Bp

41

#2 Respect The "Holy Trinity Of Database Performance"
Cursor management:-
Make SQL statements bind friendly.
Try to reuse statement objects where possible: create a statement outside a loop, then bind to it and execute it within the loop.
Watch out for hard parses in the database and statements with large version counts.
For OLTP style applications, there should be few (if any) statements which have the same SQL text but more than one execution plan.
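A minimal sketch of statement reuse is shown below; the table and column names are illustrative. The statement is prepared once outside the loop, so the database sees one shareable cursor, and the fetch size cuts the round trips needed to drain each result set.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class AccountBalanceReader {

        public void printBalances(Connection con, String[] customerIds) throws Exception {
            // prepared once, outside the loop, so the SQL text is parsed once
            PreparedStatement ps = con.prepareStatement(
                "SELECT balance FROM accounts WHERE customer_id = ?");
            ps.setFetchSize(100);                   // pre-fetch rows in blocks
            try {
                for (String customerId : customerIds) {
                    ps.setString(1, customerId);    // re-bind and re-execute
                    ResultSet rs = ps.executeQuery();
                    try {
                        while (rs.next()) {
                            System.out.println(customerId + " " + rs.getBigDecimal(1));
                        }
                    } finally {
                        rs.close();
                    }
                }
            } finally {
                ps.close();
            }
        }
    }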

Page 42: J2EE Performance And Scalability Bp

42

#2 Respect The "Holy Trinity Of Database Performance"
Good schema design:-
Avoid tables with one to one relationships; such tables should be consolidated.
Balance normalisation against performance. Normalisation means the application only has to maintain a given piece of data in one place within the database.
However, some performance requirements may require some de-normalisation of the schema design.
Consider the joins required by performance critical queries, and the join paths the schema enforces in order for those queries to be executed.

Page 43: J2EE Performance And Scalability Bp

43

#3 Eliminate Bottlenecks, Don't Replicate Them
Only scale out after all reasonable design and coding efforts have been expended to achieve the desired levels of response time and / or throughput.
Eliminate bottlenecks, do not replicate them!
This rule is 'borrowed' from Designing and Coding For Scalability in WebSphere Application Server.

Page 44: J2EE Performance And Scalability Bp

44

#4 Make Logging Infrastructures Fine Grained

Build flexibility into the way in which logging frameworks, such as log4j, are used.
Avoid having to turn on debug logging across the entire application by having the ability to increase the logging level around areas of interest.
The Java util logging framework (java.util.logging) is less flexible than log4j, but it allows logging levels to be specified at class level via the WebSphere administration console.
The Apache Commons Logging framework allows different logging implementations to be easily swapped in and out of your application. Details of how to do this can be found here.
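A minimal log4j sketch is shown below; the package and class names are illustrative. Each class gets its own logger, and debug output is guarded so that message construction is skipped when debug is off.

    import org.apache.log4j.Logger;

    public class PaymentService {

        private static final Logger LOG = Logger.getLogger(PaymentService.class);

        public void pay(OrderDto order) {
            if (LOG.isDebugEnabled()) {
                // the string concatenation only happens when debug is actually on
                LOG.debug("Paying order " + order.getOrderNumber());
            }
            // ... business logic ...
        }
    }

The matching log4j.properties fragment raises the level for a single package rather than for the whole application, along these lines:

    # debug only the area of interest, warn everywhere else
    log4j.rootLogger=WARN, CONSOLE
    log4j.logger.com.example.payments=DEBUG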

Page 45: J2EE Performance And Scalability Bp

45

#4 Make Logging Infrastructures Fine Grained
Usually relational databases are I/O bound and J2EE application servers are CPU bound.
Rigid and inflexible use of logging frameworks can make your J2EE application I/O bound!

Page 46: J2EE Performance And Scalability Bp

46

#5 Carry Out Processing Closest To The Resource That Requires It
If the application is data modification intensive, consider moving this type of processing closer to the persistence layer (the database):-
Consider stored procedures.
Some databases allow for Java stored procedures, thus allowing the skills of the J2EE developers to be leveraged closer to the database.
For processing involving the validation of data against static rules, cache the data relating to the 'rules' and perform the validation in the application server.
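Where the work really is set based and data intensive, a simple way to push it to the database is a CallableStatement, as sketched below; archive_closed_orders is a hypothetical PL/SQL or Java stored procedure, not something from the original material.

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.Types;

    public class OrderArchiver {

        public int archiveClosedOrders(Connection con, int retentionDays) throws Exception {
            // the heavy, set based work runs next to the data instead of
            // dragging every row across the network to the application server
            CallableStatement cs = con.prepareCall("{call archive_closed_orders(?, ?)}");
            try {
                cs.setInt(1, retentionDays);
                cs.registerOutParameter(2, Types.INTEGER);
                cs.execute();
                return cs.getInt(2);    // e.g. the number of rows archived
            } finally {
                cs.close();
            }
        }
    }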

Page 47: J2EE Performance And Scalability Bp

47

#6 Avoid ‘Chatty’ Designs

When dealing with integration end points that require 'conversations' and handshakes, try to consolidate calls to such places in order to minimise 'chatter'.

Otherwise, you can end up spending more time on the network than in performing productive work.

Leverage features for ‘bundling’ calls to integration and persistence end points together.

Page 48: J2EE Performance And Scalability Bp

48

#7 Do Not Reinvent The Wheel

All J2EE application server vendors have gone to great lengths in honing the performance and scalability of their offerings.
Use the fruits of these vendors' efforts.
Replicating functionality that the application server already provides is unlikely to achieve the performance of the vendor provided functionality.

Page 49: J2EE Performance And Scalability Bp

49

#8 Be Pragmatic Before Designing For Database Independence
Ask how likely it is that your software will need to support databases from different vendors:-
Likely if you are producing and selling software.
Less likely if the software is for an 'in-house' project.
There are great performance features in each vendor's database offering, the benefits of which will never be realised with the "take advantage of nothing" approach. The Oracle JDBC array interface is but one example of this.
If you need to use vendor specific features, encapsulate their use in as few places in the design and code as possible, as sketched below.
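A minimal sketch of that encapsulation follows; the interface and class names are illustrative. Callers depend only on OrderDao, the portable JDBC implementation is the default, and the one class allowed to know about Oracle specific APIs (such as the array interface) sits in a single, well known place.

    import java.util.List;

    public interface OrderDao {
        void insertOrders(List<OrderDto> orders) throws Exception;
    }

    // portable implementation: standard JDBC batching, as sketched earlier
    class JdbcOrderDao implements OrderDao {
        public void insertOrders(List<OrderDto> orders) throws Exception {
            new OrderWriter().insertOrders(orders);
        }
    }

    // the single place permitted to use Oracle specific APIs
    class OracleOrderDao implements OrderDao {
        public void insertOrders(List<OrderDto> orders) throws Exception {
            // vendor specific optimisations (e.g. the array interface) live here only
        }
    }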

Page 50: J2EE Performance And Scalability Bp

50

#9 Be Pragmatic Before Designing For Application Server Independence
The whole ethos behind Java and J2EE is write once, run anywhere!?!?
However, consider:-
Scripting, administration tools and the application server management infrastructure may be vendor specific.
Do not go outside the J2EE specification on a whim; however, using vendor specific performance features can save considerable time and money.
In reality, how likely are you to deploy your code to a different vendor's application server?

Page 51: J2EE Performance And Scalability Bp

51

#10 Make Designs Cluster Friendly
Leverage cluster friendly features such as the DistributedMap in WebSphere (see the sketch after this list).
Prefer designs that allow workloads to be distributed amongst cluster nodes, i.e. in the case of WebSphere Network Deployment, allow workloads to be distributed via the workload manager.
Avoid designs where multiple beans contend for singleton resources.
Prefer stateless session beans to stateful session beans, as calls to stateless beans can be load balanced across a cluster.
Not all J2EE resources can be shared across a cluster, e.g. timer beans and file resources!
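A minimal sketch of caching standing data in the WebSphere DistributedMap is shown below, assuming the default "services/cache/distributedmap" JNDI binding (check the binding used in your own cell); a cache miss falls through to the database once and then primes the map.

    import javax.naming.InitialContext;
    import com.ibm.websphere.cache.DistributedMap;

    public class CountryCodeCache {

        private final DistributedMap cache;

        public CountryCodeCache() throws Exception {
            // assumed default JNDI name for the default cache instance
            cache = (DistributedMap) new InitialContext()
                        .lookup("services/cache/distributedmap");
        }

        public String getCountryName(String isoCode) throws Exception {
            String name = (String) cache.get(isoCode);
            if (name == null) {
                name = loadFromDatabase(isoCode);   // only on a cache miss
                cache.put(isoCode, name);
            }
            return name;
        }

        private String loadFromDatabase(String isoCode) throws Exception {
            // query the standing data table once, then let the cache serve it
            return "...";
        }
    }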

Page 52: J2EE Performance And Scalability Bp

52

#11 Seriously Consider JPA For Your ORM Requirements
Due to shortcomings with entity beans, many object relational mapping tools and frameworks have emerged, the most popular of which is Hibernate.
These shortcomings have been addressed in JEE 5 (EJB 3.0) with the Java Persistence API (JPA).
Many vendors have put a great deal of effort into honing the performance of JPA, e.g. EJB 3.0 Performance Improvements in WAS v7.0. Therefore, give serious consideration to JPA for your ORM requirements.
If you need to take advantage of performance features such as the Oracle array interface, there is absolutely no reason why you cannot mix raw JDBC with JPA.
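A minimal JPA sketch is shown below, assuming a persistence unit named "orders" in persistence.xml; the entity and repository would live in separate source files, and nothing stops you dropping down to raw JDBC alongside this where a vendor feature is worth it.

    import javax.ejb.Stateless;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import javax.persistence.PersistenceContext;
    import javax.persistence.Table;

    @Entity
    @Table(name = "ORDERS")          // ORDER is a reserved word in SQL
    public class Order {

        @Id
        private String orderNumber;
        private String customerId;

        protected Order() { }        // no-argument constructor required by JPA

        public Order(String orderNumber, String customerId) {
            this.orderNumber = orderNumber;
            this.customerId = customerId;
        }
    }

    @Stateless
    public class OrderRepositoryBean {

        @PersistenceContext(unitName = "orders")
        private EntityManager em;

        public void save(Order order) {
            em.persist(order);       // SQL generation is left to the JPA provider
        }

        public Order find(String orderNumber) {
            return em.find(Order.class, orderNumber);
        }
    }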

Page 53: J2EE Performance And Scalability Bp

53

#11 Seriously Consider JPA For Your ORM Requirements
JPA implementations include the following performance features:-
A caching framework
SQL statement batching
The ability to write queries in conventional SQL
ObjectGrid integration via additional annotations (note, however, that you will not get the grid processing capabilities of ObjectGrid by doing this)
DB2 static SQL access support
Hibernate also provides a JPA API (Hibernate EntityManager); as this requires a JEE 5 application server, it is not clear why you would not want to use native JPA with JEE 5.

Page 54: J2EE Performance And Scalability Bp

54

#11 Seriously Consider JPA For Your ORM Requirements
JPA is part of the JEE standard; it is therefore likely to be subject to more performance honing by J2EE application server vendors than Hibernate.
In the context of Oracle, the only practical benefit of using raw JDBC over JPA is the ability to use the Oracle array interface.
Refer to the WebSphere and Java Persistence blog for more information on WebSphere and JPA.

Page 55: J2EE Performance And Scalability Bp

55

#12 Prefer JMS Over RMI For Mass Asynchronous Style Communication
When using remote devices or sites to communicate with a central J2EE application server, prefer JMS over RMI, especially if the communication style is asynchronous.
RMI requires a separate Java thread per connection and therefore has inherent scalability limits built in.
If a send and retry mechanism needs to be written using RMI, it is unlikely to be as scalable or as robust as what is available out of the box with JMS.
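A minimal sketch of the sending side is shown below, assuming a connection factory and queue bound at the hypothetical JNDI names "jms/ConnectionFactory" and "jms/InboundOrders"; the receiving side would typically be a message driven bean, with delivery, persistence and retry handled by the messaging provider rather than hand written RMI code.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;

    public class OrderSender {

        public void send(String orderXml) throws Exception {
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/InboundOrders");

            Connection connection = cf.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                TextMessage message = session.createTextMessage(orderXml);
                producer.send(message);     // asynchronous: the provider owns delivery
            } finally {
                connection.close();
            }
        }
    }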

Page 56: J2EE Performance And Scalability Bp

56

#13 Use The Most Appropriate XML Parsing API
StAX (Streaming API for XML) came about to address shortcomings with SAX (Simple API for XML) and DOM (Document Object Model).
DOM reads entire XML documents into memory and parses them into a tree. It may give acceptable performance for simple documents whose entire contents are required by the program.
StAX, a streaming pull parser:-
Has lower memory requirements than DOM, both in terms of smaller libraries and not having to read entire XML documents at a time.
Allows multiple documents to be read using a single application thread.
Only parses what you require from the document, i.e. there is no overhead in parsing XML that you might never use.
Refer to this article from Sun for more information on StAX versus DOM.
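A minimal StAX pull parsing sketch is shown below; the element name and document layout are illustrative. Only the elements the program actually cares about are touched, and no in-memory tree is built.

    import java.io.FileInputStream;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    public class OrderNumberExtractor {

        public void printOrderNumbers(String fileName) throws Exception {
            XMLInputFactory factory = XMLInputFactory.newInstance();
            FileInputStream in = new FileInputStream(fileName);
            XMLStreamReader reader = factory.createXMLStreamReader(in);
            try {
                while (reader.hasNext()) {
                    int event = reader.next();
                    // pull events and only react to the elements of interest
                    if (event == XMLStreamConstants.START_ELEMENT
                            && "orderNumber".equals(reader.getLocalName())) {
                        System.out.println(reader.getElementText());
                    }
                }
            } finally {
                reader.close();
                in.close();
            }
        }
    }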

Page 57: J2EE Performance And Scalability Bp

57

#14 Use The Most Efficient Web Services Engine Available
The overheads in parsing XML and marshalling objects to XML (and back) can be significant.
The two most popular ways of 'rendering' web services are to use Apache Axis or the native application server platform.
Use the method that gives you the greatest performance.
For example, massive strides have been made in improving web services performance in WebSphere 7.0, going beyond addressing marshalling overheads. Refer to this article.

Page 58: J2EE Performance And Scalability Bp

58

#15 Prefer JAX-WS Over JAX-RPC

JAX-WS implementations are generally faster than their JAX-RPC counterparts; this is due in part to JAX-WS using StAX.

Refer to this Sun article for further information on JAX-WS versus JAX-RPC performance.

Page 59: J2EE Performance And Scalability Bp

59

#16 Use Coarse Grained Interfaces For Web Services
Use coarse grained interfaces for web services; this will result in:-
Improved web service reusability.
Web service requests being satisfied in fewer calls by the consumer, hence less network usage, fewer round trips etc.
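A minimal JAX-WS (see #15) sketch of a coarse grained endpoint is shown below: one document style operation accepts and returns whole business documents, so a consumer needs a single request per order rather than a series of fine grained calls. The service and DTO names are illustrative, and the DTOs are assumed to be JAXB friendly (no-argument constructors and accessors).

    import javax.jws.WebMethod;
    import javax.jws.WebService;

    @WebService
    public class OrderService {

        @WebMethod
        public OrderResultDto submitOrder(OrderDto order) {
            // delegate to the session facade / business delegate in one call
            return new OrderResultDto(order.getOrderNumber(), "ACCEPTED");
        }
    }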

Page 60: J2EE Performance And Scalability Bp

60

#17 Avoid Sparse SOAP Documents
Avoid documents that are 'sparse', i.e. complex in structure and low in data content.
Such documents involve a high parsing overhead with little benefit in terms of meaningful data extraction.

Page 61: J2EE Performance And Scalability Bp

61

#18 Be Mindful Of The Performance Overheads Of WS-Security
From "Best Practices For Web Services: Part 9":-
"It's probably safe to say that enabling security through WS-Security technologies is at least twice the cost of providing similar capabilities using traditional SSL with HTTP"

Page 62: J2EE Performance And Scalability Bp

62

Implementation

Page 63: J2EE Performance And Scalability Bp

63

#1 Leverage Non-Code-Intrusive Application Server Performance Features
Leverage application server performance features that require no code changes.
In WebSphere some of these include:-
Object Request Broker pass by reference
The cache servlets 'switch' on the web container
DynaCache

Page 64: J2EE Performance And Scalability Bp

64

#2 Take Advantage Of The Oracle Client Side Cache
Oracle 11g introduces a client side result cache. This requires the thick JDBC driver.
Result sets are cached on the application server host, transparently to the application.
In a benchmark performed by Oracle, this resulted in:-
Up to 6.5 times less CPU usage on the database server host
A 15-22% response time improvement
A 7% improvement in mid tier CPU usage
The client result cache is detailed in the Oracle document here.

Page 65: J2EE Performance And Scalability Bp

65

#3 Always Use The Latest Version Of The Apache Xerces XML Parser
When using the Apache Xerces XML parser, use the latest version available for the best possible performance.

Page 66: J2EE Performance And Scalability Bp

66

Useful Resources

Design and architecture resources
Carnegie Mellon SEI Essays On Software Architecture
Sun BluePrints > J2EE Design Patterns
"Errant Architectures" by Martin Fowler
Martin Fowler's 'bliki'

Page 67: J2EE Performance And Scalability Bp

67

Useful Resources

IBM resources
Designing and Coding Applications For Performance and Scalability in WebSphere Application Server
WebSphere Application Server Best Practices for Performance and Scalability
WebSphere Community Blog
developerWorks: Best Practices For Web Services, Part 9
developerWorks: Best Practices For Web Services, Part 10

Page 68: J2EE Performance And Scalability Bp

68

Useful Resources

IBM resources
WebSphere Compute Grid
WebSphere Dynamic Cache: Improving J2EE Application Performance
Using logging and tracing in WebSphere Commerce custom code
WebSphere Network Deployment DistributedMap
EJB 3.0 Performance Improvements In WAS 7.0
Web Service Improvements in WAS 7.0

Page 69: J2EE Performance And Scalability Bp

69

Useful Resources

IBM resources
WebSphere Persistence Blog
Build a resilient SOA infrastructure, Part 1: Why blocking application server threads can lead to brittle SOA
Build a resilient SOA infrastructure, Part 2: Short-term solutions for issues involving tightly coupled SOA components

Page 70: J2EE Performance And Scalability Bp

70

Useful Resources

Sun Resources
Scaling Your J2EE Applications, Part 1
Scaling Your J2EE Applications, Part 2
Java Tuning White Paper
J2SE and J2EE Performance Best Practices, Tips And Techniques
Why StAX?
Implementing High Performance Web Services Using JAX-WS 2.0

Page 71: J2EE Performance And Scalability Bp

71

Useful Resources

Oracle Resources
Upscaling Your Database Application Performance: The Array Interface
360 Degree Blog Post On The Client Results Cache
Oracle Documentation On Using The 11g Client Results Cache