
Monitoring and tuning InfoSphere Master Data Management Server, Part 1: Set goals and tune each layer of the infrastructure

Tune the MDM Server environment for high-performance access to master data

Jesse Chen ([email protected]), Senior Performance Engineer, IBM

Yongli An ([email protected]), MDM Performance Manager, IBM

Adam Muise ([email protected]), IBM InfoSphere Evangelist, IBM

    22 October 2010

This article series provides guidelines on how to effectively monitor and properly tune each layer of the infrastructure in IBM InfoSphere Master Data Management Server by focusing on an IBM "Blue stack." The blue stack includes MDM Server, WebSphere Application Server, DB2 on AIX, and AIX operating-system-level resources. For each layer, the article discusses a list of key parameters and variables that affect performance. It also offers detailed instructions on how to monitor, and it provides general recommendations on how to tune each layer. Part 2 of the series includes practical examples of monitoring MDM Server, along with a summary of all the recommended tools for your reference.


Introduction

Application performance of IBM InfoSphere Master Data Management Server (MDM Server) is influenced by a set of factors that involve solution design, implementation, and infrastructure. This series of two articles focuses primarily on the implementation and infrastructure factors that impact performance, as well as the methodology for monitoring the MDM Server application.


You need to closely monitor and effectively tune the full MDM Server implementation stack for optimum performance. This series helps IT professionals and customers running IBM MDM Server to maximize performance by offering practical instructions and recommendations to effectively monitor and tune the full MDM Server software stack.

MDM Server helps your organization gain control over business information by enabling you to manage and maintain a complete and accurate view of customers' master data. MDM Server enables your organization to obtain the maximum value from the master data by centralizing multiple data domains (including party, account, and product) and by providing a comprehensive set of pre-built business services that support a full range of master data management (MDM) functionality. IBM is the first to deliver the MDM server for multiple domains. IBM also delivers industry-leading integration capabilities that reduce client risk and cost, thereby increasing revenue and enabling clients to quickly meet compliance requirements.

The MDM Server supports multiple operating systems, middleware, and databases. As with any software application, bottlenecks in individual hardware subsystems, an incorrectly configured operating system, or a poorly tuned application and environment can cause poor performance. The proper tools can help you to diagnose these bottlenecks.

This article walks you through the major components of the MDM Server and how the components interact. You will learn the various software layers of a service and how the MDM Server responds once a service is invoked. This article prepares you to monitor, identify, and tune various layers of the MDM Server for optimum performance.

This article provides best practices in performance tuning for the following MDM Server-specific topics:

MDM Server monitoring and tuning
WebSphere monitoring and tuning
Database monitoring and tuning

It is recommended that you read this article through in order; however, you might find some sections to be useful as immediate references.

Prerequisites

For optimum performance, the components, including the MDM server, the database server, and the WebSphere MQ server, should reside on separate systems in most cases. The primary benefit of having such a configuration is to avoid resource contention from multiple components residing on a single server. For example, running the MDM Server components on a single computer, though technically feasible, would impact the amount of throughput achieved, depending on your workload.

The components for the IBM InfoSphere MDM Version 8 used in the performance test configurations in this article are as follows:

    An application server computer running WebSphere Application Server V6.1


    A database server computer running DB2 V9.5.4

    Understanding the big picture

When approaching the subject of application performance in a multi-tier environment, a healthy combination of process competence, product knowledge, and contingent experience with your environment is absolutely essential. Performance tuning checklists, best practices, and in-depth experience with a product are great assets, but they only go so far. To begin, set performance targets that align with your business goals, and proceed to reach them with approaches that balance long-term scalability with low costs in time and resources.

This section outlines the competencies and processes that need to be in place to successfully plan for, analyze, and tune an MDM Server implementation.

    Set performance goals

When embarking on any performance tuning or testing project, it is best to set concrete goals that provide focus and an end-state to your efforts. While this might seem obvious, many tuning efforts have become needlessly expensive because focus was placed on the wrong component or the tuning was simply taken too far because a quantifiable goal had not been set. Clearly setting such a goal requires an understanding of the specific business requirements.

Your goals should be specific and rooted in the business language. For example, rather than stating a target of 30 TPS for a delta load batch run, it is often better to specify the target time for a batch window, including all start-up and follow-up work, along with a tolerance level for rejected records. This allows your tuning to be flexible and cost-efficient with the solution while it addresses the real business concern rather than an abstract number.

The language and terminology of your goals should be clear to everyone involved in achieving them. Even a commonly used term like transaction is incredibly ambiguous in a workload with different types of services.

    Following is a list of sample concrete goals:

Searches by Name and Postal Code should not exceed 1500 ms and should be less than 1000 ms for 95% of the executions

Online getParty calls should not exceed 500 ms for 95% of the executions at a sustained concurrency of 45 getParty calls per second

Delta update-maintenance transactions should complete within 3 hours assuming a maximum batch workload of 65,000 updates

    Know the environment

You need a solid understanding of the components of your MDM Server solution to help you streamline any performance testing or tuning process. MDM can only run as fast as the infrastructure allows. Any CPU, I/O, or network bottlenecks can negatively impact throughput.


Correlating infrastructure benchmarks and basic monitoring gives your performance test results enough context to help you pinpoint your bottleneck.

A multi-tier enterprise solution has a lot of related moving parts, and MDM Server is no exception. Understanding the concurrency, response time, and resource usage at each node in the environment helps provide faster isolation of system and application bottlenecks. Figure 1 shows a logical architecture of the common components in an MDM Server solution.

    Figure 1. MDM Server logical architecture


Know the workload

Understanding and quantifying the workloads in your MDM Server solution enables you to identify your testing approach and problem analysis. In any given MDM Server solution, you have different workloads that can stress different aspects of your environment. Following are some common workloads:

Initial load: Putting data into MDM Server for the first time from the source systems

Delta load: Discrete updates from the source systems after the initial load

Application-specific workloads: Service mixes specific to the business scenario of each client system in real-time

Analysis or export workloads: ETL or custom exports

Suspect duplicate processing: For those SDP invocations with an asynchronous/evergreen process

Beyond isolating the workload, it is important to understand the different mix of MDM services being called within each workload. Because of the nature of each workload, different parts of the MDM Server infrastructure are stressed. For update-heavy delta loads, there might be I/O limitations on the database server. For a highly concurrent application-specific workload with a lot


of queries, the WebSphere thread pools and database connection pools might need to be altered. You can break down existing workloads by employing the tools outlined in this article.

Establishing a performance tuning process

Whether your intention is to tune MDM Server to a desired service level agreement (SLA) or just to perform a test on a custom code change, establishing a documented and repeatable process is essential. Consider the following when conducting MDM Server performance testing:

Each performance test needs a consistent baseline with which to compare changes. Use a single and repeatable data set for your performance test, and ensure that you always restore your MDM Server database to the point at which you established your baseline (see the backup and restore sketch after this list).

Make one change at a time, and re-establish your baseline when you have confirmed that the change will become permanent.

Ensure that both the application and database layers are restored back to the most recent baseline state, which is the best-practices starting point plus any accepted tuning changes.

An MDM Server solution can have many concurrent workloads that use different entry points into the server. If you have a combination of JMS/MQ and EJB/RMI workloads, it is best to isolate them first, then tune, and then combine for a final test. In the same manner, distinctly different workloads, such as operational queries and delta load updates, should be isolated and tested separately first.
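The backup and restore sketch mentioned above is one simple way to make the database baseline repeatable. It uses the DB2 command line processor; MDMDB, the /backups path, and the timestamp are placeholders for this sketch, and the backup shown is offline, so no applications (including the MDM application servers) can be connected while it runs.

#!/bin/sh
# Capture a baseline image of the MDM database, then restore it before each
# test run so every run starts from the same data. MDMDB, /backups, and the
# timestamp are placeholders; stop the MDM application servers and disconnect
# all applications before taking the offline backup.
db2 backup db MDMDB to /backups

# ... run a test, then roll the database back to the baseline image.
# The "taken at" value must match the timestamp reported by the backup command.
db2 restore db MDMDB from /backups taken at 20101022120000 without prompting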

Monitoring and tuning MDM Server

MDM Server has a comprehensive solution for monitoring response times at various levels of granularity. Performance tracking can be enabled or disabled as required and can be configured for anything from low-impact production environment monitoring to more intensive profiling investigations. While the feature is ARM 4.0 compliant (see Resources), this article focuses on the log4j output (performancemonitor.log), because it is available to all MDM Server implementations without the need for additional software.

Beginning in MDM Version 8.0, the MDM performance tracking feature has been greatly enhanced for ease of use and greater precision in monitoring response times. The recorded response time metrics are now measured in nanoseconds instead of milliseconds in earlier versions. In addition, the logging output now more clearly groups MDM subcomponent calls into their parent controller, which allows for a log that is easier to understand and parse. This feature has an understood and manageable overhead in MDM, allowing it to be used in production for problem resolution without seriously impeding the system throughput.

To learn more about MDM Server performance tracking, consult the MDM Developer's Guide included in the MDM Server product documentation (see Resources).

Enabling MDM Server performance tracking

To make use of MDM Server performance tracking, you must explicitly enable it. Like most product features in the MDM Server, you do this by modifying the configuration and management tables, specifically the CONFIGELEMENT table. Enable MDM performance tracking at the recommended level for problem diagnosis and tuning. Listing 1 shows the commands for level 2.


Listing 1. Database commands to enable performance monitoring

UPDATE CONFIGELEMENT SET VALUE = 'true', LAST_UPDATE_DT = CURRENT_TIMESTAMP
 WHERE NAME = '/IBM/DWLCommonServices/PerformanceTracking/enabled';
UPDATE CONFIGELEMENT SET VALUE = '2', LAST_UPDATE_DT = CURRENT_TIMESTAMP
 WHERE NAME = '/IBM/DWLCommonServices/PerformanceTracking/level';
COMMIT;
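If you want to confirm the tracking settings before or after running Listing 1, a minimal sketch using the DB2 command line processor follows. MDMDB is a placeholder database alias, and the query assumes CONFIGELEMENT is in the connecting user's default schema; neither is part of the article's configuration.

#!/bin/sh
# Show the current performance-tracking configuration values.
# MDMDB is a placeholder database alias; qualify CONFIGELEMENT with the
# appropriate schema name if it is not in your default schema.
db2 connect to MDMDB
db2 "SELECT NAME, VALUE, LAST_UPDATE_DT
       FROM CONFIGELEMENT
      WHERE NAME LIKE '/IBM/DWLCommonServices/PerformanceTracking/%'"
db2 connect reset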

Because using log4j output as a starting point for problem investigation is recommended, it helps to configure log4j optimally to get the right size and roll-over window for the performancemonitor.log. To ensure proper capture of an elusive problem or to simply get a large statistical sample, dedicate about 2 GB to the rolling performancemonitor.log. This is typically enough to capture an hour of workload on an MDM Server in complicated and high-volume implementations. Find the entries from Listing 2 in the log4j.properties file in the MDM Server properties.jar archive.

Listing 2. Configure log4j properties to ensure rotation of PM logs

...
log4j.appender.performanceLog=org.apache.log4j.RollingFileAppender
log4j.appender.performanceLog.Encoding=UTF-8
log4j.appender.performanceLog.Threshold=ALL
log4j.appender.performanceLog.layout.ConversionPattern=%d : %m%n
log4j.appender.performanceLog.layout=org.apache.log4j.PatternLayout
log4j.appender.performanceLog.File=/opt/IBM/MDM85/MDMlogs/perflogs/performancemonitor.log
log4j.logger.com.dwl.base.performance.internal.PerformanceMonitorLog=ALL,performanceLog
log4j.appender.performanceLog.MaxBackupIndex=20
log4j.appender.performanceLog.MaxFileSize=100MB
...

After both the CONFIGELEMENT and log4j changes, a WebSphere Application Server restart for the MDM Server instance is required to enable the changes. Once you recycle the server, any transactions made to that MDM Server are logged by MDM Server performance tracking. It is also worth noting that you can change the level of performance logging. For example, you can turn it on or off dynamically using the CM console without the need to recycle the MDM Server instance.

The performancemonitor.log output in Listing 3 shows an example of a relatively simple customized composite transaction. Listing 3 has one parent transaction, maintainParty, yet it calls many MDM Server sub-components. From the last number in the CONTROLLER entry, you can see that this transaction took 667095210 nanoseconds, or approximately 667 milliseconds. The majority of this transaction has been omitted, because there can be tens to hundreds of lines per transaction.

Listing 3. Sample entries in performance monitoring logs

2009-02-12 21:02:45,028 : 706000020 maintainABCParty : addPerson_CONTROLLER @ CONTROLLER : : 147123449056435909 : 667095210 : : SUCCESS
706000020 maintainABCParty : externalValidation(int, Object, String) @ DWLControl : 147123449056435909 : 241234490564359621 : 75237 : Completed : SUCCESS
706000020 maintainABCParty : externalValidation(int, Object, String) @ TCRMPersonBObj : 147123449056435909 : 518123449056436088 : 64034 : Completed : SUCCESS
706000020 maintainABCParty : internalValidation(int, Object, String) @ TCRMPersonBObj : 147123449056435909 : 371234490564360022 : 1457512 : Completed : SUCCESS
706000020 maintainABCParty : searchSuspectDuplicateParties_COMPONENT @ COMPONENT : 147123449056435909 : 320123449056436177 : 341390435 : : SUCCESS


706000020 maintainABCParty : searchPersonByName_COMPONENT @ COMPONENT : 320123449056436177 : 602123449056436296 : 339744666 : : SUCCESS
706000020 maintainABCParty : addParty_COMPONENT @ COMPONENT : 147123449056435909 : 587123449056470310 : 322874434 : : SUCCESS
...
706000020 maintainABCParty : addPartyAdminSysKey_COMPONENT @ COMPONENT : 746123449056470366 : 950123449056501339 : 11416919 : : SUCCESS
706000020 maintainABCParty : internalValidation(int, Object, String) @ TCRMAdminContEquivBObj : 950123449056501339 : 706123449056501410 : 60410 : Completed : SUCCESS
706000020 maintainABCParty : execute(ExtensionParameters) @ FeedInitiator : 147123449056435909 : 324123449056502600 : 32071 : : SUCCESS

Profiling and problem resolution

While MDM Server performance tracking has many uses, the most important is its role in profiling MDM Server workloads or individual transactions. It provides two pieces of key information: granulated response times and the sequenced breakdown of MDM Server component calls. This information makes performance tracking invaluable as a first-stop diagnostic tool for performance problem diagnosis. As such, IBM Support considers it one of the critical must-gather items for any support engagement. The IBM MDM Performance Team also uses MDM performance tracking internally to benchmark and assess the MDM Server performance.

A typical usage scenario of MDM Server performance tracking is to investigate slow response from a complicated, customized MDM Server transaction. Capturing the transaction of interest in a test environment reveals the following about the subcomponents:

Their names
The frequency of their execution
Their sequence
Their response times

With this information, you can see what large and complicated transactions are doing in the implementation, and potentially you can spot areas for improvement. If used early in the development process, this information can help you spot inefficient MDM Server inquiry levels in which more component information is being brought back than the transaction actually needs, or spot inefficient logic in customized composite transactions that results in redundant MDM Server component calls. The transaction breakdown can also pinpoint subcomponents that are taking an unusually long time, helping to pinpoint where to begin further investigation for SQL or code slowdowns.

Another typical usage scenario for MDM Server performance tracking is to profile the overall workload for the purposes of tuning and transaction optimization. A simple shell or Perl script can help produce helpful summary data about which MDM components are taking up the most time in an operational MDM workload.
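As a sketch of such a script, the following awk command totals and averages the elapsed-time values per component call. The field positions are inferred from the entries shown in Listing 3 (colon-delimited records, the component call marked with " @ ", and the elapsed nanoseconds in the third field from the end); verify them against your own log, because they can vary by tracking level and release.

#!/bin/sh
# Summarize performancemonitor.log by component call: invocation count, total
# and average elapsed time in milliseconds, sorted by total time. A sketch
# only: field positions are inferred from Listing 3, so verify them against
# your own log before relying on the numbers.
LOG=${1:-performancemonitor.log}

printf '%10s %14s %12s  %s\n' "COUNT" "TOTAL(ms)" "AVG(ms)" "COMPONENT CALL"

awk -F' : ' '
{
    call = ""
    for (i = 1; i <= NF; i++)        # locate the "methodOrService @ TYPE" field
        if ($i ~ / @ /) { call = $i; break }
    ns = $(NF - 2) + 0               # elapsed time in nanoseconds
    if (call != "" && ns > 0) {
        count[call]++
        total[call] += ns
    }
}
END {
    for (c in count)
        printf "%10d %14.1f %12.2f  %s\n",
               count[c], total[c] / 1e6, total[c] / (count[c] * 1e6), c
}' "$LOG" | sort -k 2 -n -r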

The chart in Listing 4 comes from an early performance test in an MDM implementation. The chart provides a high-level perspective on transaction mix and response time in the monitored MDM server. The implementation was experiencing performance problems with poorly tuned suspect duplicate processing rules. By diving into examples of the updateParty, addPerson, and addOrganization transactions in the performance logs, it was easy to determine that the


subcomponent common to all of them, searchSuspectDuplicateParties, was taking the most time in all of the aforementioned controller-level transactions. A customized search suspect rule was modified and re-tested, which cut the transaction response times in half.

Listing 4. Transaction response time (ms)

Transaction Name          Txn Count   AvgRespTime   CumRespTime   %Workload
updateParty                   17911          3186      57072557      37.22%
addPerson                      4850          7077      34331324      22.39%
addOrganization                3353          8856      29702874      19.37%
getPerson                      9629          1105      10637676       6.94%
updateContractPartyRole       20746           447       9280139       6.05%
getOrganization                8291           889        737984       4.81%
getPartyRelationship          12299           108       1323786       0.86%

Configuring logging levels

The MDM Server logging configuration is one of the most obvious and easiest ways to optimize application performance. Frequently during development, the MDM Server Customer.log is set to log4j's DEBUG level for tracing purposes. While this is fine for most testing environments, MDM Server's logging output is substantial, and this level is not appropriate for a production workload or for performance testing. Therefore, the central logger should be set to ERROR, as shown in Listing 5.

Listing 5. Setting logging level

...
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.Encoding=UTF-8
log4j.appender.file.Threshold=ERROR
log4j.appender.file.layout.ConversionPattern=%d %-5p %3x - %m%n
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.File=/appl/spool/tssi/BP/logs/Customer.log
# optional settings
log4j.appender.file.MaxBackupIndex=10
log4j.appender.file.MaxFileSize=10MB

# The next line controls the level of output for the root logger
# [ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF]
log4j.rootLogger=ERROR, file, stdout
...

It is also worth reviewing any extra diagnostic logging that might have been added for customized transactions or extensions. The best practice is to log this extra diagnostic information to the DEBUG level in the Customer.log rather than any other log or logging level. Frequently Java developers write large portions of data (such as XML) to System.out, and this goes unseen until it is deployed to production and the application server logs are filled with debugging information. Not only is this bad for performance, but it also hinders problem resolution by filling up the log with superfluous information.

Using TAIL

Transaction audit information logging (TAIL) records the information about transactions executed in MDM Server. In order for the service to provide this information on demand, any MDM services that add or modify data will have an additional overhead when TAIL is enabled. While this feature is helpful for adhering to accounting practices and standards, it should be given due diligence in


capacity planning and performance testing. Consider the following points regarding performance when you enable TAIL:

You can configure the transactions that must be audited and the actions (or internal transactions) within each transaction that must be audited. Therefore, configure only the transactions and actions you require. Before MDM Server Version 9.0, at least one action per transaction must be configured; however, beginning with MDM Server Version 9.0, you have the capability to configure only transactions without any actions.

The percentage of MDM services that retrieve data from the TAIL tables is often low in a transactional workload, and those services are not typically a performance concern. The only concern is any MDM transactions that modify or add data, because they will trigger TAIL.

TAIL increases usage on the following MDM Server database tables: TRANSACTIONLOG, INTERNALLOG, INTERNALLOGTXNKEY, INTERNALTXNKEY, EXTERNALLOGTXNKEY (Version 9.0), and EXTERNALTXNKEY (Version 9.0). A row-count sketch for tracking their growth follows this list.

If your main objective for using TAIL is to record system usage (for reporting purposes), consider using the service activity monitor instead.
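To gauge how quickly audit data accumulates during capacity planning or a performance test, the following minimal sketch counts rows in the TAIL tables through the DB2 command line processor. MDMDB is a placeholder database alias, the tables are assumed to be in the connecting user's default schema, and the two EXTERNAL* tables should be removed from the list on releases earlier than Version 9.0.

#!/bin/sh
# Row counts of the TAIL tables, run before and after a test window to see
# how fast audit data grows. MDMDB is a placeholder alias; qualify the table
# names with a schema if needed, and trim the list for pre-9.0 releases.
db2 connect to MDMDB
for t in TRANSACTIONLOG INTERNALLOG INTERNALLOGTXNKEY INTERNALTXNKEY \
         EXTERNALLOGTXNKEY EXTERNALTXNKEY
do
    db2 -x "SELECT '$t', COUNT_BIG(*) FROM $t"
done
db2 connect reset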

Using inquiry levels

Key inquiry services that support inquiry levels include GetParty, GetContract, GetPartyWithContracts, and GetProductInstance. An inquiry level dictates what type of business objects to return in the response, and it enables you to adapt the response to the needs of the consumer and to be performance-sensitive by retrieving only what the consumer needs. A set number of inquiry levels for the various business objects are provided out of the box. A mechanism is provided for you to define your own custom inquiry levels by configuring metadata.

As of MDM Server 9.0.1, you have the capability to define your own SELECT statement associated to an inquiry level. When doing so, the inquiry services use it as opposed to the out-of-the-box internal methods. The number of roundtrips to the database to fetch the data can be greatly reduced. The administration services for creating new inquiry levels provide the option of generating an associated SELECT statement that is then managed within the metadata. The SELECT statement can be tailored based on your needs, for example:

By default the SELECT statement includes selecting all columns from all required tables. Columns you are not using as part of the implementation or columns not required for a particular inquiry level can be removed from the statement.

The WHERE clause can be tailored based on your user profile. For example, if your implementation has a strict rule about managing only one name per person, you can tailor the default WHERE clause to more efficiently join to the PERSONNAME table.

The SELECT statement can get more granular to retrieve only the data you need. For example, if a consumer requires only Home Addresses as part of a query, the WHERE clause of the particular inquiry level can include a discriminator that selects only address usage types of home address.

Using smart inquiries

Smart inquiries is an MDM Server feature that enables an implementer to turn off particular sets of MDM data model queries that are not in use. This simple change avoids the execution


of extraneous queries, significantly reducing transaction response time and increasing TPS in almost all transaction workload scenarios. Because this feature can modify the functionality of the product, and because your use of the MDM Server data model can evolve over time, use careful consideration when implementing this feature. Consider the following points for using smart inquiries:

Because most MDM services include embedded inquiries, this feature often has benefits in many MDM services.

The performance benefit is contingent on how many of your transactions are querying the unused portions of the MDM Server data model, how often they are queried, and how many unused portions you turn off.

This feature reduces the response time and increases the throughput in those affected transactions.

Optimizations typically result in a significant savings in CPU usage on the application and database layers, as well as an overall load reduction on the MDM database server, that is proportional to the number of queries eliminated from transactions by enabling this feature.

Smart inquiries settings are not in effect when the pluggable SELECT statement is used with the customizable inquiry level feature.

Using history

MDM Server can track modifications to data using the history feature. This feature uses database triggers to modify MDM Server's history tables, which are essentially mirrors of the main MDM data tables used for storing the relevant history events. The history tables are queried when inquiry services such as GetParty are used with an as-of date in the DWLControl header (referred to as point-in-time queries). For most MDM Server operational workloads, the queries to the history tables make up a very small portion of the overall workload and require little consideration for performance.

Each transaction that modifies data in MDM Server runs the relevant triggers as part of the add or update work. Therefore, in heavy insert and update workloads, the history tables also should be considered along with the core operational tables in database tuning and I/O planning. Consider, if possible, dropping the database triggers for history tables that are not required for audit purposes. If you do this, the point-in-time queries for history tables with dropped database triggers are not supported, because there will be no data in those tables for the inquiry services to retrieve.

Note that the growth of the history tables should be planned for when doing initial capacity planning. One strategy that can be employed when using DB2 on z/OS is to partition the history tables by date (range). When a partition has reached its required age based on audit requirements, it can then be dropped. Range partitioning is also available when using DB2 for Linux, UNIX, and Windows database software. Use range partitioning to enable rapid deletion of ranges of data (called roll-out). For example, if you need to roll out data by month, range partitioning by month is a reasonable strategy.
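The following is a purely illustrative sketch of that strategy on DB2 for Linux, UNIX, and Windows, run through the DB2 command line processor. The table H_AUDIT_EXAMPLE, its columns, the date range, and the MDMDB alias are all placeholders rather than part of the MDM schema; for the product's own history tables you would apply the equivalent partitioning during your physical database design.

#!/bin/sh
# Hypothetical sketch: a history-style table partitioned by month so that
# aged partitions can be rolled out quickly. All names are placeholders.
db2 connect to MDMDB
db2 "CREATE TABLE H_AUDIT_EXAMPLE (
         H_AUDIT_ID     BIGINT       NOT NULL,
         AUDIT_DATA     VARCHAR(255),
         LAST_UPDATE_DT TIMESTAMP    NOT NULL
     )
     PARTITION BY RANGE (LAST_UPDATE_DT)
     (STARTING FROM ('2010-01-01') ENDING AT ('2010-12-31') EVERY 1 MONTH)"

# Roll out the oldest month once it passes the audit retention period.
# PART0 is the generated name of the first partition; check
# SYSCAT.DATAPARTITIONS for the actual partition names before detaching.
db2 "ALTER TABLE H_AUDIT_EXAMPLE DETACH PARTITION PART0 INTO TABLE H_AUDIT_2010_01"
db2 connect reset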

Types of data extensions

There are two supported approaches to extending out-of-the-box tables with additional attributes. The first approach is by creating a side-table extension that hosts the extended attributes with a


one-to-one relationship back to the extended table. The second approach is an in-line extension by altering the out-of-the-box table. If you have a high volume of add or update processing (also known as delta processing), consider in-line extensions. The MDM workbench component provides tooling to create the pluggable persistence routines so that only single inserts and updates are made to those tables for inserting or updating a business object. See the MDM Server Developer's Guide (see Resources) for more information on this approach.

Using in-line extensions with pluggable persistence is a foundational performance consideration, because it has positive impacts on other activities, such as reducing trigger activity on the history tables and leaving fewer tables to join when querying data with the pluggable SELECT statements associated to inquiry levels.

MaintainParty composite service

It is common to employ a MaintainParty composite service to synchronize data to MDM Server (also known as delta processing). There are three main steps in such a composite service:

1. Find the party and query its details.
2. Process business rules that compare the incoming update details to the party retrieved from the database and determine how to apply the update.
3. Update the party (by invoking the UpdateParty service).

    The following are some considerations from a performance perspective:

In the first step, query only the data you need. For example, if only an address change is being processed by the composite, there is usually no need to retrieve the entire party. This is where inquiry levels can be leveraged.

In the third step, update only the data required. If additional data is provided to UpdateParty that you know is redundant and does not require updating, it forces UpdateParty to rediscover this (by querying and processing that data).

In the integration layer, try to process all changes to a single party as part of a single MaintainParty service invocation. For example, if a party has a changed address and a new contact method, it is more efficient to process that as a single MaintainParty service invocation than to process two separate invocations. Also, within the MaintainParty execution, it is most efficient to process both as part of a single UpdateParty call, especially if suspect duplicate processing is enabled, because then that process is executed only once in the scope of the transaction.

Using notifications

In order to facilitate integration with other applications in the surrounding enterprise, the MDM Server enables you to create notifications. While the notification system is functionally pliable, the most common form of notification ends up on the JMS topics configured during MDM Server installation. While a best practices recommendation for implementing notifications is beyond the scope of this article, it is worth noting that MDM Server implementations should plan for capacity to support the topics that are using the JMS system queues to avoid critical bottlenecks. Generally, consider the following when using notifications:


The system queue depth or the maximum size: Depending on your messaging management system (such as WebSphere MQ or the WebSphere Service Integration Bus) and configuration, a full queue could result in discarded notifications or in blocking transactions that are waiting to put a message on the queue.

The size of your notification payload: Depending on your message management system configuration, large messages can be rejected outright or can cause system degradation as the queue struggles to manage thousands of larger messages. This is especially important for high-availability queues that get written to a file system or database. If the notification messages are consistently large (over 1MB in payload), review the payload and consider sending primary keys to external systems so that they query the information they need from the MDM Server directly. Very large messages can also cause high memory usage and garbage collection delays in the MDM Server JVM.

Concurrency in your architecture: If your notification processing is falling behind, consider tuning the WebSphere server to allow for more threads and connections to your queue system by setting a proper connection pool for the Topic Connection Factory and the associated session pool. Also consider the processing capabilities of the applications consuming the notification messages, and ensure that they do not become bottlenecks. The specifics for adjusting concurrency depend on your messaging system, so consult the appropriate product documentation.

Using suspect duplicate processing

Suspect duplicate processing (SDP) is an essential part of any MDM Server solution. When planning for the functional concerns with this feature, serious consideration should be given to understanding the performance impact of your implementation choices. The best practices for using the SDP in your MDM Server solution are beyond the scope of this article. Instead, this article outlines the logical process and lists the major considerations with respect to application performance.

While there are many options and capabilities for the SDP in MDM Server, the goal is generally to identify potentially duplicate parties and either collapse them into one party or record the match and defer for later action. Once the SDP is invoked in MDM Server, it proceeds to do the following:

Searches for potential candidates based on available critical data
Retrieves the party information relevant to matching from all candidate parties
Matches the original party to the candidates based on the rules and criteria
Takes the action of recording suspects and/or collapsing the party

This process becomes slightly more complicated with data standardization, probabilistic matching, or customized search and match rules. However, in all cases, consider the following points:

The cost and overhead of the SDP vary greatly, depending on the specific implementation. However, it frequently takes longer than the actual add or update service that invokes it. Even though the MDM services are quite fast, it is important to understand that the SDP is a significant part of your overall workload and therefore should be given appropriate attention during the data analysis, development, and performance testing phases of your MDM Server implementation.


The response time for SDP depends primarily on the efficiency of the candidate search and the resulting number of candidates the search returns. The candidate search is considered efficient (both from a functional and performance perspective) if the parties returned are successfully matched as relevant suspects that result in a collapse or are of relative interest in data stewardship. An inefficient search returns candidate parties that are too numerous or not similar enough to be relevant. For example, if a lot of parties contain dummy data for a critical data item, such as 0000000 for a social security identifier, the resulting search might return many potential candidates that would probably not be relevant suspects. Because all of the returned candidates need to have their party information queried, many inefficient searches can significantly add overhead to your MDM Server workload. This has an impact on the overall response time and the data quality of your implementation.

Whenever processing a continuous load of updates, such as a delta workload, execute SDP asynchronously or in a specific batch window using the MDM Server evergreening process. There are many benefits to both data quality and performance with this approach. By separating out the SDP work from the regular MDM add and update services, fewer locks are held per transaction, which reduces the concurrency overhead on a busy database and results in faster completion times for both the add/update workload and the SDP workload. Thus, the lock contention is significantly reduced in your database, and your application server has shorter and more discrete units of work, which reduces CPU and JVM memory overhead.

    Tuning and monitoring WebSphere for MDM Server

Optimal WebSphere Application Server software settings vary depending on your implementation and your workload. These settings come from proactive monitoring and continuous tuning as applications and user behaviors change. Consider the following fundamental tuning parameters for each MDM WebSphere Application Server instance:

Start with a minimum heap size of 512 MB and a maximum heap size of 1024 MB for an MDM application server instance. If you are using a 64-bit WebSphere Application Server instance, you can increase it to a much higher size. Normally the heap size does not need to go beyond 2 GB for most MDM implementations; therefore, heap size should not be the reason for using 64-bit in this case. Also, be aware that using a 64-bit WebSphere Application Server instance has an overhead of 5% to 15% in terms of transaction throughput, compared with using 32-bit, depending on your workload. Reported overhead was measured from running a representative MDM Server workload with WebSphere Versions 7.0 and 6.1. The overhead is lower in WebSphere Version 7 than in WebSphere Version 6.

Ensure that the Object Request Broker (ORB) thread pool is equal to the maximum number of concurrent direct calls to the MDM ServiceController stateless session bean (RMI transactions) expected for each MDM Server instance. The maximum thread pool limit is your theoretical maximum number of concurrent users.

Ensure that the Web container thread pool is tuned to a high enough level to accommodate Web service requests, if applicable for your application. In most cases, this thread pool does not need to be adjusted, because Web container threads are non-blocking and can handle multiple MDM service requests concurrently.


Start the Enterprise JavaBeans (EJB) cache size at 3500. Adjust as needed to reach the optimal setting for MDM Server.

On the Java Database Connectivity (JDBC) data source used for MDM Server, start the Prepared Statement cache size at 300, which can improve transaction response time by 5% to 15%.

Reduce input/output activities by using the default logging level (or lower) that WebSphere sets.

    See Resources for reference information.

Generating JVM heap dumps

Generate JVM heap dumps manually using one of the following methods:

IBM_HEAPDUMP=true: Generates a heap dump in .phd format, used by newer memory tools

IBM_JAVA_HEAPDUMP_TEXT=true: Generates the classic heap dump format for older tools

SIGQUIT (kill -3 on UNIX; Ctrl+Break on Windows)

You can also set up WebSphere Version 6 to automate heap dump generation. This enables the best method to analyze memory leak problems on AIX, Linux, and Windows operating systems. Manually generating heap dumps at appropriate times can be difficult. To help you analyze memory leak problems when memory leak detection occurs, some automated heap dump generation support is available. This functionality is available only for the IBM Software Development Kit on AIX, Linux, and Windows operating systems.
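For the manual route on AIX or Linux, a minimal sketch follows. It assumes the two environment variables were already set in the server's environment before the JVM started (for example, through the server's environment entries), that the IBM SDK is in use, and that "server1" is a placeholder for your MDM application server name; kill -3 produces a javacore and, when heap dumps are enabled, a heap dump, without stopping the JVM.

#!/bin/sh
# Trigger a heap dump from a running MDM application server JVM.
# These variables only take effect if they were exported before the JVM was
# started; they are repeated here to document the intended setup.
export IBM_HEAPDUMP=true              # .phd format for newer memory tools
export IBM_JAVA_HEAPDUMP_TEXT=true    # classic text format for older tools

# Find the server JVM ("server1" is a placeholder name), take the first match,
# and send SIGQUIT. Adjust if several server JVMs run on this host.
PID=$(ps -ef | grep '[j]ava' | grep server1 | awk '{print $2}' | head -1)
[ -n "$PID" ] && kill -3 "$PID"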

Most memory leak analysis tools perform some form of difference-evaluation on two heap dumps. Upon detection of a suspicious memory situation, two heap dumps are automatically generated at appropriate times. The general idea is to take an initial heap dump as soon as problem detection occurs. Monitor the memory usage and take another heap dump when you determine that enough memory is leaked so that you can compare the heap dumps to find the source of the leak. Complete the following steps for WebSphere Version 6:

1. Click Servers > Application servers in the administrative console navigation tree.
2. Click server_name > Performance and Diagnostic Advisor Configuration.
3. Click the Runtime tab.
4. Select the Enable automatic heap dump collection check box.
5. Click OK.

You can then use HeapRoots, HeapAnalyzer, or IBM Rational Application Developer to analyze the generated heap dumps (see Resources for more details on each of these tools).

Generating verbose GC

Verbose GC output helps tune and debug many performance issues, including the following:

    Finding optimal heap size


Finding optimal GC policy
Determining potential memory exhaustion and leak causes

You can use the IBM Pattern Modeling and Analysis Tool (PMAT) for Java Garbage Collector (see Resources) to view the verbosegc output. To enable verbosegc on WebSphere Version 6, complete the following steps:

1. In the Administrative Console, expand Servers and then click on Application Servers.
2. Click on the server that is encountering the OutOfMemory condition.
3. On the Configuration tab, under Server Infrastructure, expand Java and Process Management, and click Process Definition.
4. Under the Additional Properties section, click Java Virtual Machine.
5. Click the Verbose garbage collection check box.
6. Click Apply.
7. At the top of the Administrative Client, click Save to apply changes to the master configuration.
8. Stop and restart the Application Server.
9. The verbose garbage collection output is written to either native_stderr.log or native_stdout.log for the Application Server, depending on the SDK operating system. For AIX, Microsoft Windows, or Linux, the output is in native_stderr.log. For Solaris or HP-UX, the output is in native_stdout.log. (A small collection sketch follows these steps.)
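The collection sketch below simply archives whichever of the two files exists for offline analysis with PMAT. The profile log directory is a placeholder path; point it at your own server's log directory.

#!/bin/sh
# Copy the verbose GC output for offline analysis with PMAT.
# PROFILE_LOGS is a placeholder; point it at your server's log directory.
PROFILE_LOGS=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/server1
STAMP=$(date +%Y%m%d_%H%M)

for f in native_stderr.log native_stdout.log
do
    if [ -s "$PROFILE_LOGS/$f" ]; then
        cp "$PROFILE_LOGS/$f" "/tmp/${f%.log}_$STAMP.log"
        echo "Saved $f as /tmp/${f%.log}_$STAMP.log"
    fi
done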

    Using PMI metrics

The following information provides the metrics for the WebSphere PMI (Performance Monitoring Infrastructure) data type and for the WebSphere Version 6 Application Servers it supports. The tables provide information for the following modules:

Database connection pools
Enterprise Java beans
JCA connection pools
JTA transactions
JVM/systems
ORB detail/interceptor
Web applications
Session manager
Thread pools

Not all modules need to be monitored, nor are they all implemented in MDM Server. Tables 1-5 show the important key PMI metrics that you should be monitoring for tuning and how they can be used for troubleshooting your MDM Server performance problems.

Table 1. Database connection pool metrics


Avg. waiting threads: Threads waiting for a connection to the database. This number should be near 0 for optimal performance.

Avg. wait time: Related to the number above. If no threads are waiting, this number should be 0.

Avg. pool size: Connection pool size actually used if overflow is enabled. This number closely matches the realistic number of connections at a minimum.

Table 2. JVM metrics

Free memory (JVM): Free heap. If this number is near 0, consider increasing the heap size.

Total memory: Maximum heap size.

Free memory (system): What is available from the operating system point of view. This is an important indicator of whether the operating system requires more RAM.

Avg. CPU usage: Overall CPU usage in the system. Ideally CPU should be the first bottleneck, and if you cannot increase CPU usage, there might be other bottlenecks, such as disk input/output, network input/output, or memory.

Table 3. Web application metrics

Response time: Time in milliseconds taken for a transaction to complete. Depending on the transaction, it may or may not be acceptable.

# of errors: Failed transactions can suggest connection timeouts, application errors/exceptions, or other environment failures.

Total requests: Requests received by WebSphere, which are a good indication of whether traffic is coming in as expected.

Concurrent requests: Parallel requests at any given time, which is an indication of the number of online users at a time.

Table 4. Session manager metrics

Created sessions: Historic count of sessions in a live JVM.

Invalidated sessions: Historic count of sessions that have expired or timed out.

Live sessions: Current sessions that have not expired.

Session lifetime: Average time a session lived. This is a good number to use when setting the session time-out value.

Active sessions: Sessions that carry transactions.

Table 5. Thread pool metrics

Thread creates: Number of new threads created in various thread pools: web container, default, and ORB.

Thread destroys: Number of threads destroyed.

Active threads: Current active threads that perform tasks.


Pool size: Thread pool size that allows reuse of threads in various WebSphere thread pools.

% time max in use: A low percentage means threads finish tasks quickly but have to wait out the inactivity period before they are reclaimed. You should shorten the time-out value for inactivity to put the threads back in the pool sooner.

    Troubleshooting

This section presents an actual case study that describes how the WebSphere monitoring tools in this article can be used to troubleshoot a heap exhaustion problem.

The original claim and complaint was: On this production system, there are object requests for about 170 to 180 MB, which causes the heap size to increase rapidly and causes the system to fail. Once the problem was truly identified as heap exhaustion, the following steps led to the identification of the cause.

1. Validate from the WebSphere application logs that a heap dump has occurred. Listing 6 shows an example of the native.log file.

Listing 6. Sample native.log entries showing that a heap dump occurred

[Thu Jul 23 04:08:47 2009] JVMDG217: Dump Handler is Processing Signal 11 - Please Wait.
Explanation: A signal has been raised and is being processed by the dump handler.
System action: At this point, depending upon the options that have been set, Javadump, core dump, and CEEDUMP (z/OS only) can be taken. This message is for information only and does not indicate a further failure.
User response: None.

[Thu Jul 23 04:08:47 2009] JVMDBG001: malloc failed to allocate 2621440 bytes, time: Thu Jul 23 04:08:48 2009
Explanation: The system malloc function has returned NULL.
System action: None for this message. The JVM might issue a further message.
User response: Increase the available native storage.

[Thu Jul 23 04:08:47 2009] JVMDBG001: malloc failed to allocate 2097152 bytes, time: Thu Jul 23 04:08:48 2009
Explanation: The system malloc function has returned NULL.
System action: None for this message. The JVM might issue a further message.
User response: Increase the available native storage.

2. Gather verbosegc and heap dumps at a regular interval. To identify heap growth, take several heap dumps, including one from the beginning of the test (when no heap overflow is occurring) to when heap overflow occurs. The verbosegc output can be viewed with the PMAT tool, and it shows heap usage over time. Figure 2 shows that for the case study, heap usage spikes occurred during the test.


    Figure 2. Sample verbosegc analysis output

Similar analyses of heap dumps taken at non-peak times showed no spikes in JVM usage, which remained at a maximum of 400 MB. But when the heap usage was very high, HeapAnalyzer showed that TCRMContractBObj used close to 1 GB of heap, as shown in Figure 3.

    Figure 3. Sample heap dump analysis

In the tree view of the heap in Figure 3, there are 40,252 objects named TCRMContractBObj, each taking up close to 30 KB of memory in the heap. The cumulative effect is 909 MB of heap usage. You can surmise that this is the root cause of the heap overflow.


3. Review the transaction to see that a single getFSParty call inside a business proxy actually queries a particular party with well over 40,000 contract objects. The recommendation is to implement more sophisticated retrieval logic for elite clients with a large number of contracts. That resolved the issue permanently.

    Tuning the JVM

There is no such thing as the best setting for MDM Server. For WebSphere Version 6.1 (used for MDM 8.0 and later), Table 6 gives the recommended start values for AIX. You should adjust them accordingly.

Table 6. Start values for tuning the JVM (values recommended for AIX)

JVM initial heap size (MB): 512
JVM maximum heap size (MB): 1024
GC policy recommendation: -Xgcpolicy:gencon if SMP systems are used (as in most cases)
Minimum active threads (in pool): 70 threads
Maximum active threads (in pool): 70 threads
Allow threads allocated beyond maximum: No
Thread inactivity timeout: 100 seconds
Maximum in-memory session count: 1000
Session timeout: 10 minutes
JDBC connection pool connection timeout: 10000 seconds
JDBC connection pool min connections: 30 (adjust database server settings accordingly)
JDBC connection pool max connections: 100 (adjust database server settings accordingly)
RC4 and MD5 encryption: Disable if WebSphere Application Servers are deployed in a secure environment

    Following are some notes about some of the start values:

Setting the JVM heap size larger than 512 MB: For the best and most consistent throughput, set the (-Xms) starting minimum and the (-Xmx) maximum to be the same size. Also, remember that the value for the JVM heap size is directly related to the amount of physical memory for the system. Never set the JVM heap size larger than the physical memory on the system, to avoid the disk input/output caused by swapping.

Session timeout 10 minutes: The default value of session timeout is 30 minutes. Reducing this value to a lower number can help reduce memory consumption requirements, allowing a higher user load to be sustained for longer periods of time. Reducing the value too much can interfere with the user experience. You need to determine the right level based on your end-user requirements: if users mostly have quick tasks to finish, set this value low; if users have long tasks to finish, set this value higher.


Class garbage collection: -Xgcpolicy:gencon handles short-lived objects differently than objects that are long-lived. Many MDM workloads generate many short-lived objects, which leads to shorter pause times with this policy. This parameter could allow approximately a 10% increase in throughput for many MDM workloads.

Servlet engine thread pool size 70: For the use case, 70 was used for both the minimum and maximum settings. Ideally, set this value and monitor the results using PMI. Increase this value if all the servlet threads are busy most of the time.

JDBC connection pool max: Depending on how many concurrent users are expected and on how many JVMs are used as the MDM application server, a data source connection pool is used to buffer the user load and provide efficiency in establishing database connections for queries. The sum of all max connections for the JVMs should not exceed that of the database engine, which MAXAPPLS in DB2 UDB specifies (see the sketch after these notes).

Disable encryption: Communications between EJBs can be configured to use SSL, but this is not required if the MDM application server is already in a secure environment (as most are). Not using encryption and decryption can save about 15% of CPU processing power.
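As a sketch of verifying the database side of the MAXAPPLS rule with the DB2 command line processor: MDMDB is a placeholder database alias, and the value 200 is only an example, chosen to cover the sum of the JDBC pool maximums across all MDM application server JVMs plus headroom for other connections.

#!/bin/sh
# Check MAXAPPLS and raise it so it covers the sum of the JDBC connection
# pool maximums across all MDM application server JVMs plus other agents.
# MDMDB and the value 200 are placeholders for this sketch.
db2 get db cfg for MDMDB | grep -i maxappls
db2 update db cfg for MDMDB using MAXAPPLS 200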

    Conclusion

    This article highlighted the importance of the following:

Understanding the big picture
Setting up clear performance goals
Knowing your environment
Establishing a repeatable and consistent performance testing process.

The article also provided guidelines on how to effectively monitor, tune, and optimize MDM Server and the WebSphere Application Server layer for optimal performance, with brief discussions of best practices for a list of key areas and features that affect performance in these two layers. Part 2 of the series describes the lower parts of the stack, covering guidelines and recommendations on how to effectively monitor and tune the DB2 layer. Part 2 also presents tools commonly used to monitor operating system-level resources.

    Acknowledgments

We would like to thank David Borean, Lena Woolf, Bill Xu, Steve Reese, and Berni Schiefer for their input and suggestions for this article series.


    Resources

    Learn

Go to Part 2 of the series for details about the lower parts of the stack.
See the IBM Redbook publication WebSphere Application Server V6 Scalability and Performance Handbook for more information about WebSphere Application Server tuning.
See the WebSphere Application Server 6.0 Information Center for more information about all WebSphere tuning parameters and the meaning of each.
Find out more from this Redpaper about understanding MDM Server performance.
Consult the MDM Must Gather procedures to help with troubleshooting.
Find out more about ARM 4.0 compliance.
Refer to DB2 UDB Performance Tuning Scenarios.
Check out the IBM Redbook New Mainframe: z/OS Basics.
Best practices: Tuning and monitoring database system performance.
Use WebSphere Application Server Version 6 Scalability and Performance Handbook as a reference.
Get more details in DB2 UDB for z/OS Version 8 Performance Topics.
Review the MDM Server InfoCenter.
Browse the IBM Redpaper WebSphere Customer Center: Understanding Performance.
Learn to load a large volume of Master Data Management data quickly using RDP MDM Server maintenance services batch.
Review DB2 tuning tips for OLTP applications.
Learn more about Information Management at the developerWorks Information Management zone. Find technical documentation, how-to articles, education, downloads, product information, and more.
Stay current with developerWorks technical events and webcasts.
Follow developerWorks on Twitter.

    Get products and technologies

Use the IBM Pattern Modeling and Analysis Tool (PMAT) for Java Garbage Collector to view the verbosegc output.

Build your next development project with IBM trial software, available for download directly from developerWorks.

    Discuss

Participate in the discussion forum for this content.
Check out the developerWorks blogs and get involved in the developerWorks community.


    About the authors

    Jesse Chen

Jesse F. Chen is a senior software engineer in the Information Management group with a focus on software performance and scalability. He has 12 years of IT experience in software development and services. His recent projects include FileNet and DB2 integration and optimization, Master Data Management, and WebSphere Product Center. Jesse is also a frequent contributor and speaker at multiple ECM conferences on topics of performance, scalability, and capacity planning. He earned a Bachelor's degree in Electrical Engineering and Computer Science from the University of California, Berkeley, and an M.S. degree in Management in Information and Systems from New York University.

    Yongli An

Yongli An is an experienced performance engineer focusing on Master Data Management products and solutions. He is also experienced in DB2 database server and WebSphere performance tuning and benchmarking. He is an IBM Certified Application Developer and Database Administrator - DB2 for Linux, UNIX, and Windows. He joined IBM in 1998. He holds a bachelor's degree in Computer Science and Engineering and a master's degree in Computer Science. Currently Yongli is the manager of the MDM performance and benchmarks team, focusing on Master Data Management Server performance and benchmarks, and helping customers achieve optimal performance for their MDM systems.

    Adam Muise

Adam Muise is a Senior Consultant for IBM InfoSphere products in the Information Management Technology Ecosystem. Here he enables IBM Business Partners to sell and implement InfoSphere software, with a focus on Master Data Management solutions. In his previous role at IBM, he was a founding member of the MDM Performance and Benchmarks team. Before joining IBM in 2006, Adam worked as a Senior Consultant for Performance and High Availability at Hewlett Packard and as a Performance Specialist at CIBC. Adam holds an honors degree from York University in Information Technology and Digital Humanities.

Copyright IBM Corporation 2010 (www.ibm.com/legal/copytrade.shtml)
Trademarks (www.ibm.com/developerworks/ibm/trademarks/)


Table of Contents

Introduction
Prerequisites
Understanding the big picture
  Set performance goals
  Know the environment
  Know the workload
  Establishing a performance tuning process
Monitoring and tuning MDM Server
  Enabling MDM Server performance tracking
  Profiling and problem resolution
  Configuring logging levels
  Using TAIL
  Using inquiry levels
  Using smart inquiries
  Using history
  Types of data extensions
  MaintainParty composite service
  Using notifications
  Using suspect duplicate processing
Tuning and monitoring WebSphere for MDM Server
  Generating JVM heap dumps
  Generating verbose GC
  Using PMI metrics
  Troubleshooting
  Tuning the JVM
Conclusion
Acknowledgments
Resources
About the authors
Trademarks