
Post on 31-May-2020


Performance Key Patterns and Anti-patterns

• Efficient performance patterns we have seen project teams implement successfully

• Anti-patterns that can lead to serious performance degradation

These patterns are the result of proactive performance reviews done within the Solution Architecture team

Session Overview

Patterns

• Tune DMF for bulk imports/exports
• Tune the batch framework
• Have proper indexes in place
• Use number sequence pre-allocation
• Use data caching correctly
• Use generic methods wisely
• Regularly execute cleanup routines
• Stay current on hotfixes
• Execute regular index maintenance

Anti-patterns

• Misuse OData
• Run expensive record-by-record operations
• Refresh aggregate measurements not being used
• Use plan guides as the first mitigation step

DO Tune Data Management Framework (DMF) for bulk import/exports

• Maximize performance by parallelism
• Disable unnecessary validation / configuration keys
• Set number sequence pre-allocation
• Clean up staging tables
• Enable set-based processing
• Use delta loading

DO Tune DMF for bulk import/exports – Data entity settings

DO Tune DMF for bulk import/exports – Framework parameters

• OData is not natively designed for handling large volumes

• Large interfaces built on OData lead to time-outs and very slow processing

• Limit the use of OData to only when it is necessary

• Primarily real-time calls

• Don't use the OData connector for Power BI reports

• It is a direct performance hit on the AOS and the database

• Prefer the Entity store (embedded Power BI) and BYOD (for data enrichment/mash-up requirements)

• Prefer the Data Management Framework (DMF) for large bulk imports/exports

• The framework is designed for performance

• DMF will maximize throughput while using minimal resources

DO NOT Misuse OData

DO NOT Misuse OData (Bulk imports/exports)

Diagram: an external system pushing bulk imports/exports into D365FO through OData (anti-pattern) versus through DMF (recommended).

For reference volumes, see https://docs.microsoft.com/en-us/dynamics365/unified-operations/dev-itpro/data-entities/integration-overview?toc=/fin-and-ops/toc.json

DO NOT Misuse OData (PowerBI reports)

Diagram: the OData connector reads directly from the AOS and AXDB (transaction DB), impacting both. Preferred alternatives: embedded Power BI querying the Entity store (AXDW, direct query), or Bring Your Own Database (BYOD) to a customer Azure SQL database in an external Azure subscription.

DO Tune batch framework

• Create a 24-hour timetable to get an overview of which heavy (batch) processes are running in a specific timeframe

• Define different batch groups and assign batch server(s) to each batch group to balance batch load across AOS servers

• System administration > Setup > Batch group

• The empty batch group should be reserved for system batch jobs only!

• Assign each batch job to the appropriate batch group

DO Tune batch framework

• Define different active periods and assign batch jobs to them to decide at which time of day a batch job can start (and when it must not)

• System administration > Setup > Active periods for batch jobs

• Especially useful for batch jobs that have a high recurrence but are not required all day long

• Assign each batch job to the appropriate active periods

Note: This feature is available as of Platform Update 21: https://docs.microsoft.com/en-us/dynamics365/unified-operations/dev-itpro/sysadmin/activeperiod

DO Tune batch framework

• Tune Maximum batch threads (maxbatchsessions) to maximize utilization of each AOS server

• System administration > Setup > Server configuration

• Test the optimized Maximum batch threads value in a performance sandbox environment first, taking interactive user workload and other processes into account

• Make sure each heavy batch process is designed to run in parallel

• Batch bundling

• https://blogs.msdn.microsoft.com/axperf/2012/02/24/batch-parallelism-in-ax-part-i/

• Individual task modeling

• https://blogs.msdn.microsoft.com/axperf/2012/02/24/batch-parallelism-in-ax-part-ii/

• Top picking

• https://blogs.msdn.microsoft.com/axperf/2012/02/28/batch-parallelism-in-ax-part-iii/

• Comparison of the three techniques

• https://blogs.msdn.microsoft.com/axperf/2012/03/01/batch-parallelism-in-ax-part-iv/
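The parallelism techniques above are X++ batch patterns; as a rough conceptual sketch (in Python, with invented names, not the actual batch framework), "top picking" amounts to workers repeatedly claiming the next available task from a shared pool, so load balances dynamically instead of being split up front:

```python
import queue
import threading

def top_picking(tasks, n_workers=4):
    # Each worker repeatedly "picks the top" remaining task from a shared
    # pool, so faster workers naturally process more tasks (no static split).
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                task = q.get_nowait()  # claim the next available task
            except queue.Empty:
                return                 # pool drained: worker exits
            outcome = task * 2         # placeholder for the real batch work
            with lock:
                results.append(outcome)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

# every task is processed exactly once, by whichever worker was free
results = top_picking(list(range(10)))
```

Batch bundling, by contrast, would split the task list into fixed chunks before the workers start, which can leave slow workers holding a long tail.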

DO Have proper indexes in place

Query without a proper index in place: a table scan over unsorted leaf pages (… 20 90 40 80 80 50 40 10 …):

    SELECT ACCOUNTNUM FROM CUSTTABLE WHERE CUSTTABLE.CUSTGROUP = '40';

With a proper nonclustered index, the same query becomes an index seek over sorted values (… 10 20 40 40 50 80 80 90 …):

    CREATE NONCLUSTERED INDEX MSFT_Perf_CustGroupIdx
        ON [CUSTTABLE] ([CUSTGROUP], [DATAAREAID], [PARTITION]);

    SELECT ACCOUNTNUM FROM CUSTTABLE WHERE CUSTTABLE.CUSTGROUP = '40';
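The effect is easy to reproduce in any database engine. A minimal sketch using SQLite (the table and index names mirror the example above, but this is not D365FO code) shows the query plan switching from a full scan to an index search once the index exists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CUSTTABLE (ACCOUNTNUM TEXT, CUSTGROUP TEXT)")
conn.executemany(
    "INSERT INTO CUSTTABLE VALUES (?, ?)",
    [(f"C{i}", str(i % 100)) for i in range(1000)],
)

def plan(sql):
    # The last column of EXPLAIN QUERY PLAN output describes the access path.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT ACCOUNTNUM FROM CUSTTABLE WHERE CUSTGROUP = '40'"
before = plan(query)  # full table scan: every row is examined
conn.execute("CREATE INDEX CustGroupIdx ON CUSTTABLE (CUSTGROUP)")
after = plan(query)   # index search: only matching entries are touched
```

The `before` plan reports a scan of CUSTTABLE; the `after` plan reports a search using CustGroupIdx.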

DO Use number sequence pre-allocation

If pre-allocation is NOT used, the result is:

• A higher number of database lookups on NumberSequenceTable
• Possible lock escalations
• Cache capabilities on the AOS are not used
• Reduced performance

Advice:

• 100 might be appropriate when 75,000+ numbers are being used each day
• 20-50 might be appropriate when 25,000+ numbers are being used each day
• 10 might be appropriate when 10,000+ numbers are being used each day
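Conceptually, pre-allocation trades one database lookup per number for one lookup per block of numbers. A hedged Python sketch of the idea (an invented class, not the actual NumberSequenceTable implementation):

```python
import itertools

class NumberSequenceSketch:
    # Hypothetical illustration only: hands out sequence numbers, fetching
    # a block of `preallocation` numbers per simulated database roundtrip.
    def __init__(self, preallocation=1):
        self.preallocation = preallocation
        self.db_roundtrips = 0
        self._db_counter = itertools.count(1)  # stand-in for NumberSequenceTable
        self._cache = []

    def next_number(self):
        if not self._cache:
            self.db_roundtrips += 1  # one lookup fetches a whole block
            self._cache = [next(self._db_counter)
                           for _ in range(self.preallocation)]
        return self._cache.pop(0)

no_prealloc = NumberSequenceSketch(preallocation=1)
with_prealloc = NumberSequenceSketch(preallocation=100)
for _ in range(1000):
    no_prealloc.next_number()
    with_prealloc.next_number()
# 1000 roundtrips without pre-allocation vs. 10 with a block size of 100
```

Fewer roundtrips also means fewer opportunities for lock contention on the sequence row, which is the other failure mode the slide lists.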


DO NOT Run expensive record-by-record operations

NOT SET-BASED

    static void UpdateRecordset(Args _args)
    {
        TableRegular tr;

        // not set-based -> high number of roundtrips to the database
        ttsBegin;
        while select forupdate tr
            where tr.Num < 1000
        {
            tr.Value = "a";
            tr.update();
        }
        ttsCommit;
    }

SET-BASED

    static void UpdateRecordset(Args _args)
    {
        TableRegular tr;

        // set-based -> reduced number of roundtrips to the database
        ttsBegin;
        update_recordset tr setting Value = "b"
            where tr.Num < 1000;
        ttsCommit;
    }


DO Use Data Caching correctly

• Set an appropriate CacheLookup value for each table
• Verify at least one unique index exists (AllowDuplicates set to No)
• Verify the PrimaryIndex property is set to a unique index

DO Use Data Caching correctly

NOT USING DATA CACHE

    CustTable custTable;
    str 10 singletonValue = "4001"; // custTable.AccountNum field value
    int i, nLoops = 65536;

    custTable.disableCache(true); // do NOT use the cache; took 148 seconds to complete
    for (i = 0; i < nLoops; i++)
    {
        select * from custTable where custTable.AccountNum == singletonValue; // unique index exists on this field
    }

USING DATA CACHE

    CustTable custTable;
    str 10 singletonValue = "4001"; // custTable.AccountNum field value
    int i, nLoops = 65536;

    custTable.disableCache(false); // use the cache; took 1 second to complete
    for (i = 0; i < nLoops; i++)
    {
        select * from custTable where custTable.AccountNum == singletonValue; // unique index exists on this field
    }


DO Use generic methods wisely e.g. find()

NOT USING GENERIC METHODS WISELY

    CustTable custTable;
    str 10 singletonValue = "4001"; // custTable.AccountNum field value
    str 50 A, B, C, D; // just fields
    int i, nLoops = 65536;

    custTable.disableCache(false); // do use the cache; took 7 seconds to complete
    for (i = 0; i < nLoops; i++)
    {
        A = CustTable::find(singletonValue).AgencyLocationCode; // the generic method helps, but causes overhead
        B = CustTable::find(singletonValue).BankAccount;
        C = CustTable::find(singletonValue).CashDisc;
        D = CustTable::find(singletonValue).DefaultInventStatusId;
    }

USING GENERIC METHODS WISELY

    CustTable custTable;
    str 10 singletonValue = "4001"; // custTable.AccountNum field value
    str 50 A, B, C, D; // just fields
    int i, nLoops = 65536;

    custTable.disableCache(false); // do use the cache; took 1.6 seconds to complete
    for (i = 0; i < nLoops; i++)
    {
        // call find() once, or even query with a field list:
        // select AgencyLocationCode, BankAccount, CashDisc, DefaultInventStatusId
        //     from custTable where custTable.AccountNum == singletonValue
        custTable = CustTable::find(singletonValue);
        A = custTable.AgencyLocationCode;
        B = custTable.BankAccount;
        C = custTable.CashDisc;
        D = custTable.DefaultInventStatusId;
    }


• Batch history tables (BatchHistory, BatchJobHistory, BatchConstraintsHistory)

• System administration > Periodic tasks > Batch job history clean-up

• Notification tables (EventInbox, EventInboxData)

• System administration > Periodic tasks > Notification clean up

• DMF Staging tables

• Data management workspace > “Staging cleanup” tile

• Journal cleanup routines

• General Ledger > Periodic tasks > Clean up ledger journals

• Inventory management > Periodic tasks > Clean up > Inventory journals cleanup

• Production control > Periodic tasks > Clean up > Production journals cleanup

• https://blogs.msdn.microsoft.com/axsa/2018/09/05/cleanup-routines-in-dynamics-365-for-finance-and-operations/

DO Regularly execute cleanup routines

Warehouse management:

• Wave batch cleanup
• Cycle count plan cleanup
• Mobile device activity log cleanup
• Work user session log cleanup
• Containerization history purge
• Work creation history purge

DO Regularly execute cleanup routines

Inventory and warehouse management:

• Calculation of location load adjustments
• Inventory dimensions cleanup
• (Warehouse management) on-hand entries cleanup
• On-hand entries aggregation by financial dimensions
• Inventory settlements cleanup
• Inventory journals cleanup

• Review reports and identify the aggregate measurements that need to be refreshed

• Split aggregate measurements into the following categories:

• Measurements not used in Power BI reports: these should not be processed

• Measurements for which the customer wants the data updated frequently

• Measurements for which the customer does not need the data updated frequently

• Cancel the currently scheduled "Deploy measurement" batch jobs

• Recreate batch jobs from Aggregate measurements

• System administration > Setup > Entity store

• Select the measurements and click Refresh to start a new batch job instance with recurrence set to the desired frequency

DO NOT Refresh Aggregate measurements not being used

• Hotfixes (X++ and Binary updates) delivered by Microsoft

• Fix standard product bugs

• Improve performance/stability

• Improve customizability (e.g. new extension points)

• Define a periodic process to include them in releases

• Include Binary and Critical X++ updates (visible in LCS environment detail page)

• Staying current on hotfixes will limit the risk of facing:

• Standard bugs

• Crashes

• Performance issues

• Last-minute discovery of missing extension points

DO Stay current on hotfixes

• Index fragmentation does not have the same impact as in the past, due to improvements in storage technologies

• For some indexes, fragmentation is still impactful

• These indexes are difficult for Microsoft to rebuild automatically, as doing so can impact critical workloads

• Current options for doing index maintenance:

• LCS Environment Monitoring > SQL Insights

• Queries > Get fragmentation details: get an overview of the fragmentation of each index

• Actions > Rebuild index: self-service, no scheduling capability, rebuilds indexes one by one

NOTE: this is not an option for regular index maintenance, but it is very useful in a troubleshooting scenario where, e.g., a query has been identified as causing a lot of IO even though a proper index is in place

• System batch job (PU22+)

• Scheduling capability based on business rhythm, index maintenance based on parameter settings

DO Execute regular index maintenance

DO Execute index maintenance – LCS > SQL Insights


DO Execute regular index maintenance – System batch job

• System administration > Inquiries > Batch jobs & Batch job history

• System administration > Inquiries > Database > SQL index fragmentation details

• System administration > Setup > System job parameters

• As a first step… try to tune expensive code / queries

• Add/change indexes

• Increase selectivity

• Add hints

• Rebuild indexes

• Update statistics

• Apply other code changes (e.g. change pattern)

• Use plan guides as a last resort in mitigating the performance issue

DO NOT Use plan guides as first mitigation step

Example: plan guides can be a solution to overcome parameter sniffing

• SQL will recompute a new plan each time the plan cache is flushed, such as when an update of statistics runs

• The plan that is chosen is based on "sniffing" the parameters of the first execution of that query. After that, the same plan will be used, regardless of parameters

• For this reason, the same query might sometimes get multiple plans over time, some of which are far worse than others for given data distributions
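A toy model of why sniffing causes this (plain Python with invented names, not SQL Server internals): the plan compiled for the first parameter value sticks until the cache is flushed, even when later parameters would favor a different plan:

```python
class PlanCacheSketch:
    # Toy model: the plan compiled for the FIRST parameter value is cached
    # and reused for every later execution, whatever its parameters are.
    def __init__(self):
        self._cache = {}

    def _compile(self, selectivity):
        # A seek wins for selective predicates; a scan wins otherwise.
        return "index seek" if selectivity < 0.1 else "table scan"

    def run(self, query_id, selectivity):
        if query_id not in self._cache:          # first execution: "sniff"
            self._cache[query_id] = self._compile(selectivity)
        return self._cache[query_id]             # reused regardless of params

    def flush(self):
        # e.g. an update of statistics flushes the plan cache
        self._cache.clear()

pc = PlanCacheSketch()
first = pc.run("q1", 0.01)   # sniffed on a highly selective value
second = pc.run("q1", 0.9)   # same plan reused, a poor fit this time
pc.flush()
third = pc.run("q1", 0.9)    # recompiled after the flush
```

A plan guide pins one plan permanently, which is why it should only be applied after confirming that plan is good for the realistic range of parameters.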

DO NOT Use plan guides as first mitigation step

• Create a plan guide to force Plan ID

• Retrieve plan IDs and execution statistics for a specific query ID

• Download and analyze SQL plan for each ID

• Analyze and determine best plan

• Test your solution

• LCS > SQL Insights > Actions > Create a plan guide to force Plan ID: this forces the selected plan to be used by creating a plan guide

NOTE: this action applies only to the database that it is executed on

https://docs.microsoft.com/en-us/dynamics365/unified-operations/dev-itpro/lifecycle-services/querycookbook


DO NOT Use plan guides as first mitigation step

• Create a plan guide to add table hints

• Retrieve plan IDs and execution statistics for a specific query ID

• Download and analyze SQL plan for each ID

• Usually, table hints are determined after looking through multiple different query plans for a given query. For example, if an index seek on a table always outperforms a scan, it might be beneficial to add a FORCESEEK hint.

• Test your solution

• LCS > SQL Insights > Actions > Create a plan guide to add table hints: this installs a plan guide that adds those table hints to future executions of the query

NOTE: this action applies only to the database that it is executed on

https://docs.microsoft.com/en-us/dynamics365/unified-operations/dev-itpro/lifecycle-services/querycookbook

© Copyright Microsoft Corporation. All rights reserved.

Q & A


Thank you.
