CEN-Com-IT-MET-ABAP Recommending Performances V2
KHEOPS CORE MODEL INTERNATIONAL
ABAP Recommended Performances

Document information:
Team: Technical - Development
Responsible: Philippe Bride
Version: V2
Last modification on 02/09/2004
Status: Validated
TABLE OF CONTENTS
1. Introduction
2. Notations
2.1. Pictograms
2.2. Notations in the algorithms
3. General rules
3.1. Database server/Application server
3.2. Data loading in internal table
3.3. Test of the return code
3.4. Authorized tools/Forbidden tools
3.5. Query
4. Selection on one table
4.1. Selected fields
4.2. The WHERE clause
4.3. The SELECT instruction
4.4. Information structures
5. Selection on many tables
5.1. Logical database (LDB)
5.2. Nested select
5.3. Sub-query
5.4. Joins
5.5. Storing in an internal table with a SELECT
5.6. View
5.7. Using recommendations
6. Commit and Rollback
7. Data dictionary
7.1. Table creation
7.2. Buffering
7.3. Indexing
8. Mass treatments
8.1. Deletion
8.2. Modification
8.3. Insertion
9. Remarks on some instructions
9.1. instruction
9.2. Declaration of an internal table
9.3. Loop
9.4. Nested loops
9.5. Form/Perform
10. High-level performance
10.1. Hashed tables
10.2. Buffering
10.3. Indexing
10.4. PERFORM
10.5. Use of the database server
1. Introduction
The objective of this documentation is not to provide solutions to performance problems from an ABAP point of view, but to suggest some methods which will improve the execution time of specific programs.
In the development phase, the programmer must always keep in mind that the way he or she codes has a direct impact on the execution time.
Considering the significant volumes of the databases at EUROVIA, the slightest badly coded access to a table such as MSEG can prove catastrophic. Indeed, in most cases, database accesses represent the major part of the total execution time of a program. That is why the developer must be rigorous when coding reads, insertions, deletions and modifications of records in the tables.
First of all, Chapters 2 and 3 present the notations used in this document and some general rules for development.
The two following chapters deal with data selection: on one table with the SELECT instruction, then on many tables with different loops and joins.
Chapter 6 treats the COMMIT and ROLLBACK instructions.
Chapter 7 provides some advice about the data dictionary.
Chapter 8 explains how to carry out optimized mass treatments: deletion, modification and insertion.
Chapter 9 brings several remarks on some instructions, such as nested loops.
Last but not least, Chapter 10 provides some solutions to serious performance problems in order to reach high-level performance.
THE PERFORMANCE OBJECTIVE IS NOT TO WRITE A PROGRAM QUICKLY, BUT TO WRITE A PROGRAM CORRECTLY SO THAT IT PERFORMS BETTER AT EXECUTION TIME.
2. Notations
2.1. Pictograms
High performance. Tool to use in priority.
Low performance. Tool to avoid except in some particular cases.
Very low performance. Forbidden tool. In some very particular cases, if the use of the tool seems essential, the developer must indicate it inside the program and discuss it with the Performance team before releasing the order.
Significant remark. Trap to avoid.
Information.
2.2. Notations in the algorithms
Notation: Signification
table: Name of a transparent table (an ORACLE table, for example)
wt_table: Name of an internal table
wa_: Name of a work area
wc_const: Name of a constant
<fs_>: Name of a field symbol
ws_struct: Name of an internal structure
field1, field2: Name of a field of a table
key: Name of a key of a table
s_: Name of a select-option
zv_view: Name of a specific view
Treatment: Treatment carried out in a program
3. General rules
3.1. Database server/Application server
Even if working on the database server is often quicker than working on the application server, it is recommended to solicit the application server in order to avoid overloading the database server. Therefore it is important to:
1. Repatriate the data from the database to the application server,
2. Work on the application server.
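As a sketch of this two-step pattern, using the generic notation of this document (table and field names are placeholders):

```abap
* Step 1: repatriate the data from the database server in one packet.
DATA: BEGIN OF wt_table OCCURS 0,
        field1 LIKE table-field1,
        field2 LIKE table-field2,
      END OF wt_table.

SELECT field1 field2
  INTO TABLE wt_table
  FROM table
  WHERE field1 = xxx.

* Step 2: work on the application server (sort, loop, compute).
SORT wt_table BY field2.
LOOP AT wt_table.
* Treatment.
ENDLOOP.
```

The sorting is done with SORT on the application server rather than with ORDER BY on the database server.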
3.2. Data loading in internal table
When we use the SELECT ... INTO TABLE instruction, exchanges between the database server and the application server are made by packets.
When we use the SELECT ... ENDSELECT instruction, exchanges between the database server and the application server are made record by record.
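The difference can be sketched as follows, with the generic names of this document (the second form is the one to avoid):

```abap
* Packet transfer: one SELECT fills the whole internal table.
SELECT field1 field2
  INTO TABLE wt_table
  FROM table
  WHERE field1 = xxx.

* Record-by-record transfer: one exchange per selected record.
SELECT field1 field2
  INTO (wt_table-field1, wt_table-field2)
  FROM table
  WHERE field1 = xxx.
  APPEND wt_table.
ENDSELECT.
```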
3.3. Test of the return code
It does not make sense to test the return code (sy-subrc) inside loops such as SELECT ... ENDSELECT and LOOP ... ENDLOOP. Indeed, if there is no record, the instructions in the loop are not executed.

3.4. Authorized tools/Forbidden tools
Authorized tools
SELECT SINGLE
SELECT ... UP TO 1 ROWS ... ENDSELECT
SELECT ... INTO TABLE wt_table
SELECT ... INTO TABLE wt_table ... FOR ALL ENTRIES
Forbidden tools (by descending order of prohibition)
SELECT ... ENDSELECT. Except in the case of only one record (SELECT ... UP TO 1 ROWS ... ENDSELECT.).
Logical databases
Nested selects
GROUP BY, ORDER BY, CORRESPONDING FIELDS OF, DISTINCT
3.5. Query
Queries are forbidden because programs generated by a Query perform very poorly. The requests are not optimized, so the execution time is too high.

4. Selection on one table
4.1. Selected fields
If we wish to select only one record in a table, and not the first record in key order, we will use an instruction which does not loop on the table.

To avoid: exiting a SELECT ... ENDSELECT loop after the first record.
SELECT *
FROM table
WHERE field1 = xxx.
Treatment.
EXIT.
ENDSELECT.

Use: field1 is not the primary key.
SELECT *
FROM table
UP TO 1 ROWS
WHERE field1 = xxx.
Treatment.
ENDSELECT.

Use: field1 is the primary key or a part of the primary key.
SELECT SINGLE *
FROM table
WHERE field1 = xxx.
IF sy-subrc = 0.
Treatment.
ENDIF.
After many comparative tests, it has been shown that the SELECT UP TO 1 ROWS instruction seems to be more efficient than the SELECT SINGLE one. During the use of a SELECT SINGLE instruction, if the field used in the WHERE clause is only a part of the primary key, the Workbench displays a warning message.
Even if the SELECT * instruction is very quick to code, it unfortunately takes up a lot of memory space in most cases. It is recommended to select only the fields useful for the treatment.
On the instruction which follows ENDSELECT, we do not know whether the SELECT loop was entered or not. This implies that it is always necessary to test the return code after the ENDSELECT instruction.
Except for the particular cases:
Use: dynpro updating.
SELECT SINGLE *
FROM table
WHERE field2 = xxx.

Use: selection of a limited number of fields.
SELECT SINGLE field1 INTO table-field1
FROM table
WHERE field2 = xxx.
4.2. The WHERE clause
The order of the fields in the WHERE clause has an impact on the execution time of the request. SAP works with a positioning pointer on the tables. The first field of the WHERE clause allows the pointer to position itself; the table is then read sequentially. Therefore the order of the tests in the WHERE clause is very important.
The developer must pay attention while coding SELECT instructions.
When the developer codes a SELECT instruction in order to read the database, he must respect the following rules, by descending priority:
Put the fields in the same order as the key
Put the fields in the same order as an existing index
Put the most restrictive tests in first position
Remark: after a SELECT instruction, do not code a CHECK instruction on the fields of the table. It is better to insert these tests in the WHERE clause.
To avoid: the test is made with a CHECK instruction.
SELECT *
INTO TABLE wt_table
FROM table
WHERE field1 = xxxx.
LOOP AT wt_table.
CHECK wt_table-field2 EQ zzz.
Treatment.
ENDLOOP.

The test is in the WHERE clause.
SELECT *
INTO TABLE wt_table
FROM table
WHERE field1 = xxxx
AND field2 EQ zzz.
IF sy-dbcnt NE 0.
LOOP AT wt_table.
Treatment.
ENDLOOP.
ENDIF.
4.3. The SELECT instruction
General rules
If we select all the fields of a table, it is not necessary to specify the INTO clause in the SELECT instruction. Indeed SAP, which works only with buffers, will automatically fill in the work area of the read table.

Not useful to specify the INTO clause.
SELECT SINGLE *
INTO table
FROM table
WHERE field = xxxx.

SELECT SINGLE *
FROM table
WHERE field = xxxx.
After a SELECT instruction, data are not systematically sorted in key order. Hence the data must be sorted in case of reuse. But unjustified redundant sorts are forbidden because they cost a lot in terms of performance.
Data sorting

To avoid: sorting on the database server (ORDER BY).
SELECT *
FROM table
WHERE field1 = xxxx
ORDER BY field2.
Treatment.
EXIT.
ENDSELECT.

Sorting on the application server.
SELECT *
INTO TABLE wt_table
FROM table
WHERE field1 = xxxx.
IF sy-dbcnt NE 0.
SORT wt_table BY field2.
LOOP AT wt_table.
Treatment.
ENDLOOP.
ENDIF.
The INTO option
This option allows storing the selected columns in variables defined by the developer.

Use: treatment without storing the data into a table.
SELECT SINGLE *
INTO table
FROM table
WHERE field1 = xxxx.
IF sy-dbcnt NE 0.
wv_field2 = table-field2.
Treatment.
ENDIF.

SELECT SINGLE field2
INTO wv_field2
FROM table
WHERE field1 = xxxx.
IF sy-dbcnt NE 0.
Treatment.
ENDIF.
The INTO TABLE option
This option allows storing many records in an internal table.
It is not necessary to carry out a REFRESH of the internal table because it is automatically done by this option. Furthermore, the APPEND to the internal table is implicit.
There is no ENDSELECT instruction because it is not a loop treatment. The storage into the table is carried out in one go.
The APPENDING TABLE option
This option allows adding further records to a specified internal table.
The APPEND instruction is included in the APPENDING TABLE option. If the internal table is refreshed (REFRESH) just before using the APPENDING TABLE option, it is the same as using the INTO TABLE option. Therefore, if the internal table is empty, the APPENDING TABLE option must not be used.
There is no ENDSELECT instruction because it is not a loop treatment. The updating of the table is carried out in one go.
The CORRESPONDING FIELDS OF option
If we want to store the contents of the chosen fields in another structure (different from the table work area), it must have the same structure as the chosen fields.
The INTO CORRESPONDING FIELDS OF instruction is equivalent to a MOVE-CORRESPONDING between the selected columns and the target fields.
Indeed, this option is used when there is a difference between the structure formed by the fields of the SELECT and the target (in the INTO). In this case, SAP searches one by one for the fields which have the same names and then makes the assignment.
But it is quicker to put the fields of the SELECT instruction in the same order as the structure declaration or the internal table.
Not recommended uses of CORRESPONDING FIELDS OF

Direct storage into a table work area or a structure, without using the CORRESPONDING fields.
SELECT SINGLE *
INTO CORRESPONDING FIELDS OF table
FROM table
WHERE key = xxxx.

SELECT SINGLE *
FROM table
WHERE key = xxx.
Structure
DATA: BEGIN OF ws_struct,
field1 LIKE table-field1,
field2 LIKE table-field2,
field3 LIKE table-field3,
END OF ws_struct.
SELECT SINGLE *
INTO CORRESPONDING FIELDS OF ws_struct
FROM table
WHERE field1 = xxxx.

DATA: BEGIN OF ws_struct,
field1 LIKE table-field1,
field2 LIKE table-field2,
field3 LIKE table-field3,
END OF ws_struct.
SELECT SINGLE field1 field2 field3
INTO ws_struct
FROM table
WHERE field1 = xxxx.
Internal table
DATA: BEGIN OF wt_table OCCURS 0,
field1 LIKE table-field1,
field2 LIKE table-field2,
field3 LIKE table-field3,
END OF wt_table.
SELECT *
INTO CORRESPONDING FIELDS OF wt_table
FROM table
WHERE field1 = xxxx.
APPEND wt_table.
ENDSELECT.

DATA: BEGIN OF wt_table OCCURS 0,
field1 LIKE table-field1,
field2 LIKE table-field2,
field3 LIKE table-field3,
END OF wt_table.
SELECT field1 field2 field3
INTO TABLE wt_table
FROM table
WHERE field1 = xxxx.

SELECT *
INTO CORRESPONDING FIELDS OF TABLE wt_table
FROM table
WHERE field1 = xxxx.

SELECT field1 field2 field3
INTO TABLE wt_table
FROM table
WHERE field1 = xxxx.
There are several forms, like INTO CORRESPONDING FIELDS OF TABLE and APPENDING CORRESPONDING FIELDS OF TABLE.
Example:
Given a transparent table with the fields field1, field2, field3, field4 and field5, in this order,
Given an internal table with the fields field1, field3 and field4, in this order,
We get:
REFRESH wt_table.
SELECT *
INTO CORRESPONDING FIELDS OF wt_table
FROM table
WHERE field1 = xxxx.
APPEND wt_table.
ENDSELECT.

Use: storage of data into an empty table.
If the table is not empty, its data are overwritten.
SELECT field1 field3 field4
INTO TABLE wt_table
FROM table
WHERE field1 = xxxx.
SELECT *
INTO CORRESPONDING FIELDS OF wt_table
FROM table
WHERE field1 = xxxx.
APPEND wt_table.
ENDSELECT.

Use: addition of data into a non-empty table.
SELECT field1 field3 field4
APPENDING TABLE wt_table
FROM table
WHERE field1 = xxxx.
The FOR ALL ENTRIES option
This option allows retrieving all the records of a table from the data of an internal table, thanks to the links created in the WHERE clause.
To use the FOR ALL ENTRIES option:
The internal table must not be empty.
The records must be sorted in order to delete the duplicates.
The WHERE clause must consist of equalities.
If the internal table is empty, the SELECT instruction loops on all the data of the selected table.
The deletion of the duplicates is compulsory for two reasons:
It accelerates the execution of the SELECT instruction, since the volume of the internal table is smaller.
It avoids program dumps caused by a lack of memory when reading the table2 table (see the following example).
If we know that the internal table has duplicates, we have to copy the internal table, then:
- test the number of records
- sort the records
- delete the duplicates
SELECT *
INTO TABLE wt_table1
FROM table1
WHERE field1 IN s_field1.
SELECT *
INTO TABLE wt_table2
FROM table2
FOR ALL ENTRIES IN wt_table1
WHERE field1 = wt_table1-field1
AND field2 = xxxx.
Treatment.

SELECT *
INTO TABLE wt_table1
FROM table
WHERE field1 IN s_field1.
IF sy-dbcnt NE 0.
SORT wt_table1 BY field1.
DELETE ADJACENT DUPLICATES FROM wt_table1
COMPARING field1.
SELECT *
INTO TABLE wt_table2
FROM table2
FOR ALL ENTRIES IN wt_table1
WHERE field1 = wt_table1-field1
AND field2 = xxxx.
Treatment.
ENDIF.
4.4. Information structures
The information structures are defined by customizing and are therefore created by the operating consultants. From a technical point of view, the information structures are only transparent tables and are always named Snnn.
The key of an information structure always begins with the same fields, which are: MANDT, SSOUR, VRSIO, SPMON, SPTAG, SPWOC and SPBUP. The other fields of the key are those specified by the consultant creating the information structure.
Furthermore, when an information structure is created, SAP automatically adds:
An index on KUNNR if this field exists in the key
An index on MATNR if this field exists in the key
An index on PMNUX if this field exists in the key
If two of these fields exist in the key, SAP adds two indexes, one for each field.
The first, second and third indexes added are given fixed names by SAP.
Looking at the contents of information structures, we can note that most of the keys predefined by SAP are practically never consulted.
Furthermore, for a reason we do not know, readings on information structures are very costly in response time, even if all the fields of the key are filled.
Example:
Given the information structure S631 (daily movements) defined in the following way:
Key: MANDT, SSOUR, VRSIO, SPMON, SPTAG, SPWOC, SPBUP, WERKS and MATNR
Supplementary Fields: PERIV, UWDAT, BASME, MZUBB, MAGBB
Index automatically added by SAP:
VAB defined as: MANDT, MATNR
Index created for performance:
A1: MANDT, VRSIO, WERKS, MATNR and SPTAG
Use of the created index
SELECT *
INTO TABLE wt_s631
FROM s631
WHERE ssour = space
AND vrsio = wv_vrsio
AND spmon = 000000
AND sptag >= wv_beginning.

9.4. Nested loops

Nested LOOP FROM index with one key field
SORT wt_table1 BY field1.
SORT wt_table2 BY field1.
wv_index = 1.
LOOP AT wt_table1.
LOOP AT wt_table2 FROM wv_index.
IF wt_table2-field1 > wt_table1-field1.
wv_index = sy-tabix.
EXIT.
ELSEIF wt_table2-field1 = wt_table1-field1.
Treatment.
ENDIF.
ENDLOOP.
Treatment.
ENDLOOP.
Nested LOOP FROM index with several key fields
If several fields compose the common key of the two tables, it is necessary to write the program differently. In this case the key is no longer composed of one field, but of a sub-structure. This makes it possible to test the sub-structure as if it were one field.
Nested LOOP FROM index with several key fields
DATA: BEGIN OF wt_table1 OCCURS 0,
field1 LIKE table1-field1,
field2 LIKE table1-field2,
field3 LIKE table1-field3,
field4 LIKE table1-field4,
field5 LIKE table1-field5,
field6 LIKE table1-field6,
END OF wt_table1.
DATA: BEGIN OF wt_table2 OCCURS 0,
field1 LIKE table2-field1,
field2 LIKE table2-field2,
field3 LIKE table2-field3,
field4 LIKE table2-field4,
field5 LIKE table2-field5,
field6 LIKE table2-field6,
END OF wt_table2.
DATA: BEGIN OF ws_struct1,
field1 LIKE wt_table1-field2,
field2 LIKE wt_table1-field4,
field3 LIKE wt_table1-field6,
END OF ws_struct1.
DATA: BEGIN OF ws_struct2,
field1 LIKE wt_table2-field2,
field2 LIKE wt_table2-field4,
field3 LIKE wt_table2-field6,
END OF ws_struct2.
SORT wt_table1 BY field2 field4 field6.
SORT wt_table2 BY field2 field4 field6.
wv_index = 1.
LOOP AT wt_table1.
ws_struct1-field1 = wt_table1-field2.
ws_struct1-field2 = wt_table1-field4.
ws_struct1-field3 = wt_table1-field6.
LOOP AT wt_table2 FROM wv_index.
ws_struct2-field1 = wt_table2-field2.
ws_struct2-field2 = wt_table2-field4.
ws_struct2-field3 = wt_table2-field6.
IF ws_struct1 = ws_struct2.
Treatment.
ELSEIF ws_struct2 > ws_struct1.
wv_index = sy-tabix.
EXIT.
ENDIF.
ENDLOOP.
ENDLOOP.
ws_struct1 (and ws_struct2) must be the key of the table. But, if several records can have the same value for the field of the FROM clause, we must write:
Nested LOOP FROM index with several key fields.
Use: Several records can have the same value for the field of the FROM clause.
DATA: BEGIN OF wt_table1_key,
field1 LIKE table1-field1,
field2 LIKE table2-field2,
field3 LIKE table3-field3,
END OF wt_table1_key.
DATA: BEGIN OF wt_table1 OCCURS 0,
tab1_key LIKE wt_table1_key,
field4 LIKE table4-field4,
field5 LIKE table5-field5,
END OF wt_table1.
DATA: BEGIN OF wt_table2 OCCURS 0,
tab1_key LIKE wt_table1_key,
field6 LIKE table6-field6,
field7 LIKE table7-field7,
field8 LIKE table8-field8,
END OF wt_table2.
SORT wt_table1 BY tab1_key.
SORT wt_table2 BY tab1_key.
wv_index = 1.
LOOP AT wt_table1.
Treatment.
LOOP AT wt_table2 FROM wv_index.
IF wt_table2-tab1_key = wt_table1-tab1_key.
Treatment.
ELSEIF wt_table2-tab1_key > wt_table1-tab1_key.
wv_index_tmp = sy-tabix.
EXIT.
ENDIF.
ENDLOOP.
AT END OF tab1_key.
wv_index = wv_index_tmp.
ENDAT.
Treatment.
ENDLOOP.
The index is memorized here in a temporary variable. The wv_index variable is only updated when all the records having the same value for the tab1_key field have been processed.
Nested LOOP FROM index or LOOP with AT NEW and READ TABLE BINARY SEARCH?
If there is a common field to sort the tables on, we use the nested LOOP FROM index. Otherwise we use LOOP with AT NEW and READ TABLE BINARY SEARCH.

Nested LOOP FROM index
Use: common fields between the tables.
SORT wt_table1 BY key.
SORT wt_table2 BY key.
wv_index = 1.
LOOP AT wt_table1.
Treatment.
LOOP AT wt_table2 FROM wv_index.
IF wt_table1-key = wt_table2-key.
Treatment.
ELSEIF wt_table2-key > wt_table1-key.
wv_index = sy-tabix.
EXIT.
ENDIF.
ENDLOOP.
Treatment.
ENDLOOP.

LOOP with AT NEW and READ TABLE BINARY SEARCH
Use: no common fields between the tables.
SORT wt_table2 BY field.
LOOP AT wt_table1.
AT NEW field1.
CLEAR wt_table2.
READ TABLE wt_table2
WITH KEY field = wt_table1-field2
BINARY SEARCH.
ENDAT.
Treatment.
ENDLOOP.
Nested LOOP with READ TABLE BINARY SEARCH
This method is useful when wt_table1 is not sorted by field1. This case occurs when there are already two nested LOOPs according to the previous method and we have to focus on the annex data. With this method, we do not loop on the entire wt_table2 table but only on the desired data.

Nested LOOP with READ TABLE BINARY SEARCH
Use: wt_table1 is not sorted by field1.
SORT wt_table2 BY field.
LOOP AT wt_table1.
Treatment.
READ TABLE wt_table2
WITH KEY field = wt_table1-field1
BINARY SEARCH TRANSPORTING NO FIELDS.
IF sy-subrc = 0.
LOOP AT wt_table2 FROM sy-tabix.
IF wt_table2-field = wt_table1-field1.
Treatment.
ELSE.
EXIT.
ENDIF.
ENDLOOP.
ENDIF.
Treatment.
ENDLOOP.
Nested LOOP with a table of type SORTED (only from Release 4.0)
This method is similar to the previous one. The wt_table2 table being defined as sorted by the field1 and field2 fields, the LOOP AT ... WHERE instruction is optimized in the same way as the LOOP FROM index method. In this case the coding is easier to read, and SAP generates an error if the table is not correctly sorted.
Performance
Deletions and modifications on a table of type SORTED are good in terms of performance (comparable to a READ ... BINARY SEARCH). But insertions are very costly, because the new records must be correctly positioned so that the table remains sorted.
So it is not recommended to use this type of table. It is better to use a standard table with a SORT statement, for example.
Nested LOOP
with a sorted table
DATA: wt_table2 LIKE SORTED TABLE OF ws_struct
WITH UNIQUE KEY field1 field2
WITH HEADER LINE.
LOOP AT wt_table1.
Treatment.
LOOP AT wt_table2 WHERE field1 = wt_table1-field1.
Treatment.
ENDLOOP.
Treatment.
ENDLOOP.
It is necessary to use the INSERT instruction to keep the table sorted and not to create a duplicate key.
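As a minimal sketch, inserting a line into the sorted table declared above with INSERT (value1 and value2 are illustrative values):

```abap
* INSERT positions the new line so that the sort order is kept.
ws_struct-field1 = value1.
ws_struct-field2 = value2.
INSERT ws_struct INTO TABLE wt_table2.
IF sy-subrc NE 0.
* The unique key already exists: the line was not inserted.
ENDIF.
```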
LOOP AT wt_table ASSIGNING <fs> (only from Release 4.6)
From Release 4.6 it is possible to use the LOOP AT ... ASSIGNING instruction. In this case, the field symbol directly points to the line of the table. It is therefore possible to directly modify a record without copying it into the header line and updating it afterwards.

From Release 4.6.
DATA: BEGIN OF wt_table1 OCCURS 0,
field1 LIKE table1-field1,
field2 LIKE table2-field2,
field3 LIKE table3-field3,
field4 LIKE table4-field4,
END OF wt_table1.
FIELD-SYMBOLS: <fs> LIKE wt_table1.
LOOP AT wt_table1 ASSIGNING <fs>.
<fs>-field1 = value1.
<fs>-field2 = value2.
Treatment.
ENDLOOP.
This instruction is more costly than a classical LOOP because it requires time to maintain the table. This solution is only worthwhile if lots of data must be updated in the table.
The READ instruction has a similar coding: READ TABLE wt_table ASSIGNING <fs>.
Comparative study of performance
Type of loop (performance: 1 = high, 7 = low):
1. Nested LOOP FROM index with one key field. The execution time is proportional to the number of records processed. Use: one field composes the common key of the tables.
2. Nested LOOP with several key fields. Use: several fields compose the common key of the two tables.
3. Nested LOOP with READ TABLE BINARY SEARCH. Use: a table cannot be sorted.
4. LOOP with AT NEW and READ TABLE BINARY SEARCH. Use: no common fields to sort the tables on.
5. Nested LOOP WHERE with SORTED TABLE. The table is defined as a sorted table. Not recommended.
6. Nested LOOP WHERE. The execution time is exponential in the number of records processed. Forbidden.
7. Basic nested LOOP. Forbidden.
9.5. Form/Perform
The use of FORM/PERFORM makes it possible to organize the program, making it more readable and easier to maintain, with appropriate names easier to understand.
The PERFORM instruction allows passing parameters in the call. A parameter is a variable, so type conversion matters. That is why, if a FORM has parameters, it is advised to type the different parameters as often as possible, in order to avoid conversions that are very costly in time for the SAP processor. An untyped parameter has no real type.
DATA: wv_var TYPE i.
LOOP AT wt_table.
PERFORM add1 USING wv_var.
ENDLOOP.
FORM add1 USING param.
ADD 1 TO param.
Treatment.
ENDFORM.

DATA: wv_var TYPE i.
LOOP AT wt_table.
PERFORM add1 USING wv_var.
ENDLOOP.
FORM add1 USING param TYPE i.
ADD 1 TO param.
Treatment.
ENDFORM.
10. High-level performance
In this section some methods are presented which may be used ONLY in EXTREME CASES, i.e. when there are serious performance problems and/or very large volumes.
10.1. Hashed tables
Principle
Access to a hashed table is only made by a unique key, thanks to a hashing algorithm.
Advantage
The time required to read a hashed table is independent of the number of records. Therefore hashed tables are very useful for voluminous tables which are often accessed for reading.
Drawbacks
Sorting a hashed table is impossible.
The key must be complete when doing a search.
The reading of a hashed table is carried out with the READ TABLE ... WITH TABLE KEY instruction.
A hashed table is quick for reading; apart from that, it has few uses.
It is impossible to read, insert or modify a record in a hashed table by using an index.
Example
DATA wt_table TYPE HASHED TABLE OF ws_struct
WITH UNIQUE KEY key
[INITIAL SIZE n] [WITH HEADER LINE]
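Reading such a table might look like the following sketch; the full key must be supplied (xxx is an illustrative value):

```abap
* Constant-time access through the hash key.
READ TABLE wt_table WITH TABLE KEY key = xxx.
IF sy-subrc = 0.
* Treatment with the found record.
ENDIF.
```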
Performance
The cost of filling the table is the same for a standard internal table and a hashed table.
But the time for reading a hashed table is less than or equal to the time required for a standard table.
A document is available on this subject:
Hashed_tables.doc
10.2. Buffering
Buffering a table is forbidden in most of the cases. However, in some cases and when a good optimization is important, we can use it. The particular case is the following one:
transparent or pool table,
AND access to the table by reading,
AND a lot of queries to the table,
AND access by primary key (or a part of it),
AND few data updating.
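Conversely, when a buffered table must exceptionally be read with up-to-date data from the database, the BYPASSING BUFFER addition of SELECT can be used; a sketch with the generic names of this document:

```abap
* Forces the read to go to the database instead of the table buffer.
SELECT SINGLE *
  FROM table
  BYPASSING BUFFER
  WHERE key = xxx.
```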
More comprehensive documents are available:
general_buffering.doc, single_record_buffering.doc, AxxxTables_buffering.doc
10.3. Indexing
In most of the cases, the creation of an index is not recommended because we must be very careful. An index can be created when there is no more solution. Each creation must be studied individually.
The Trace SQL tool permits to check the use of an index.
10.4. PERFORM
The use of PERFORM is very important for the program structure, but it is costly. Therefore, if there are serious performance problems, it is advised to avoid using PERFORM.
10.5. Use of the database server
In most cases it is advised to use the application server (to sort a table, to join two tables, and so on). But if there is a particularly big problem in one program, it can be worth using the database server.
The use of a sub-query can be good when a query needs to make intermediate selections in the tables and these intermediate data are not useful for the following treatments.