Walter Janißen
DB2 Aktuell 20.09.2017
Db2 12 Early Experiences
Agenda
1. Introduction
2. EPICs tested
3. V12 Experiences
4. Post GA Plans and Migration Schedule
5. Conclusion
6. What is still missing
Introduction
ERGO Group (www.ergo-group.com)
• Our clear brand positioning and broad line-up make us one of the major insurance groups in Germany and Europe
• Competence and an all-round product range: insurance – provision – advice – services
• Our brand promise "To insure is to understand" sets us apart
• International operations in over 30 countries, focusing on Europe and Asia
• Part of Munich Re, one of the world's leading reinsurers and risk carriers
• Specialists for health, legal protection and travel
• Life and property-casualty insurance
• Direct and online insurance
Key figures (as at 31 Dec 2016)
• € 17 billion premium income
• € 16 billion insurance benefits
• Investments of € 127 billion
• About 44,000 staff and sales agents
• Excellent ratings for financial strength
Introduction
ITERGO
• ITERGO is the IT provider of the ERGO Insurance Group
• ITERGO combines insurance expertise with providing and maintaining sophisticated IT solutions for the benefit of its customers
• ITERGO has very deep project experience
• ITERGO develops and provides the international IT platform for ERGO's subsidiaries
Key facts
• Founded in 2000
• Headquarters: Düsseldorf, Germany
• Employees: ~1,400
Introduction
System Environment
• Most DB2 systems are 6-way data sharing systems
• > 99% of the data is managed by DB2; < 1% is managed by DL/1
• The main business consists of IMS transactions and batch jobs executing static SQL
• Less than 1% distributed threads
Introduction
DB2 Systems

Plex                Type        Data sharing  TX/sec (peak)  TX/day (avg.)  Users signed on
Sandbox (3)         Test        2-way         -              -              -
Developing          Developing  6-way         -              200,000        ~120
Approval            Developing  6-way         -              150,000        ~80
Integration         Developing  6-way         -              40,000         ~50
Special test        Developing  6-way         -              -              -
Education           Developing  4-way         -              1,000          ~50
Production          Production  6-way         600-700        16-17 M        ~5,500
Special Production  Production  1-way         -              -              -
Data Warehouse      Production  5-way         -              -              ~50
• Started the DB2 V12 ESP at the end of April 2016
• We had to wait for the migration of z/OS to 2.1
• Project team members
• Walter Janißen
• Ulf Brinkmeier
• Jürgen Ritterbach
• Thorsten Zurek
• Ina Fischer
• IBM Support members
• Peter Hartmann
• Christoph Theisen
• Gareth Jones
• Whitney Huang
• Thomas Beavin
• Motivated by ‘ITERGO requirements’ implemented in DB2 V12
Introduction
Preparation – Overview
• Migrate z/OS to V2.2
• Clone the Education system as our ESP system
• Delete all rows in SYSSTATFEEDBACK
• Run the pre-migration checks
• REORG everything that exceeded our thresholds
• Gather basic statistics for all tablespaces
• Bind all packages
• Gather the recommended statistics
• Rebind every package again to get a baseline to compare with
• Rebind all plans with PROGAUTH(ENABLE)
• Dump this system
EPICs tested
DDL
• Insert Partition
• Lift Partition Limit
• Enhanced Trigger Support
SQL
• SQL Pagination
• Full Merge
• Piecewise Delete
Optimizer
• Adaptive Indexes
• Bubble Up
• Extend NPGTHRSH to default statistics
• Maintenance of Profile Tables
• Prune unused Columns
Utilities
• REORG using Inline Stats
• RUNSTATS using INVALIDATECACHE
• Prune empty Partitions
• MODIFY RECOVERY with DELETEDS
Miscellaneous
• Temporal Logical Transactions
• Dynamic and static plan stability
• Fast Insert and Fast Traversal
EPICs tested (DDL)
Insert Partition
Current situation with DB2 V11
• Several large tablespaces with hundreds of partitions (4 or 8 GB)
• Rows are inserted into only a few partitions, spread over the whole tablespace
• REORG REBALANCE is periodically required
EPICs tested (DDL)
Insert Partition
Prerequisite
• Tablespace must be a UTS
V12
• Insert a partition where it is required
• Run REORG
• Caution: logical partition numbers have to be translated to physical partition numbers
Advantages
• Necessary REORGs are limited to a minimum of partitions
• You don't have to take care of adjacent partitions which might reach their space limit too
• If you can determine the limit key for the newly inserted partition, handling 'partition full' conditions is very easy to automate: just add the partition and run REORG
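The "add partition and run REORG" procedure can be sketched as follows; all object names and limit keys are hypothetical:

```sql
-- Hypothetical objects: table MYSCHEMA.T1 is partitioned by range with
-- existing limit keys 100, 200, 300; the range up to 200 is filling up.
-- Db2 12 allows inserting a new partition in the middle of the range:
ALTER TABLE MYSCHEMA.T1
  ADD PARTITION ENDING AT (150);

-- The ALTER is a pending change; a REORG of the affected partition
-- range materializes it. Note that the new partition gets the next free
-- PHYSICAL partition number, so logical partition numbers must be
-- translated to physical ones when building the REORG job.
```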
EPICs tested (DDL)
Lift Partition Limit
Prerequisite
• UTS partition-by-range tablespace
• A classic partitioned tablespace can be altered first; that alter can still be pending
V12
• ALTER … RELATIVE
• An online REORG will materialize all pending changes
• There is no way back to absolute numbering
Problems
• During conversion all image-copy datasets are allocated at once
  • Tape drives are no longer feasible
  • DASD space can lead to problems (e.g. running out of volumes)
• TAPEUNITS is planned as an additional clause, similar to the COPY utility
• STACK YES is not supported for inline copies
  • This is the show stopper for us
  • But: STACK YES is possible for the COPY utility
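Assuming a hypothetical tablespace name, the conversion can be sketched as:

```sql
-- Converting a PBR UTS to relative page numbering (RPN),
-- which lifts the old absolute partition/size limits:
ALTER TABLESPACE MYDB.TS1 PAGENUM RELATIVE;

-- This is a pending change: an online REORG of the whole tablespace
-- materializes it, and that REORG is where all inline image-copy
-- datasets are allocated at once (the DASD/tape issue noted above).
-- There is no ALTER back to absolute page numbering afterwards.
```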
EPICs tested (DDL)
Enhanced Triggers
V11
• Triggers are limited in functionality
V12
• Advanced triggers are supported
  • SQL PL within the trigger body
  • More than one statement within the trigger body
• Old triggers (now called basic triggers) can still be used without modification (MODE DB2SQL)
Our test
• Consolidated different basic triggers on one object into a single advanced trigger
• Because of the additional functionality, advanced triggers are more expensive
• In our tests we saw a CPU overhead of 30 to 40 % for the same functionality
Conclusion
• Use basic triggers if the trigger definition is simple
• Use advanced triggers if the new features are needed
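As an illustration (tables and logic are hypothetical): a basic trigger carries MODE DB2SQL and a single statement, while a Db2 12 advanced trigger may carry an SQL PL body with variables and several statements:

```sql
-- Basic trigger (pre-V12 style): MODE DB2SQL, one statement.
CREATE TRIGGER MYSCHEMA.TRG_BASIC
  AFTER UPDATE ON MYSCHEMA.ACCOUNTS
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  INSERT INTO MYSCHEMA.ACCOUNT_AUDIT
  VALUES (N.ID, N.BALANCE, CURRENT TIMESTAMP);

-- Advanced trigger (Db2 12): no MODE DB2SQL; SQL PL body.
CREATE TRIGGER MYSCHEMA.TRG_ADV
  AFTER UPDATE ON MYSCHEMA.ACCOUNTS
  REFERENCING OLD AS O NEW AS N
  FOR EACH ROW
BEGIN
  DECLARE DELTA DECIMAL(15,2);
  SET DELTA = N.BALANCE - O.BALANCE;
  INSERT INTO MYSCHEMA.ACCOUNT_AUDIT
  VALUES (N.ID, DELTA, CURRENT TIMESTAMP);
END
```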
EPICs tested (SQL)
SQL Pagination
V11
• Application developers use static scrollable cursors just to skip to the desired row
• Problems
  • Without a commit, an application can reach the maximum number of OBIDs for a database
  • Even read-only applications can create long-running units of recovery
  • The result set is always materialized
V12
• Numeric pagination
  • With OFFSET n ROWS you can specify a starting point to begin with
• Data-dependent pagination
  • A simple way of building the complex OR predicates for scrolling logic
  • Just code (col1, col2) >= (:H1, :H2)
• LIMIT n is a synonym for FETCH FIRST n ROWS ONLY
• OFFSET n ROWS is not allowed together with LIMIT n, but it is with FETCH FIRST n ROWS ONLY
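The two pagination styles can be sketched as follows (table and host variables are hypothetical):

```sql
-- Numeric pagination: skip 20 rows, return the next 10.
SELECT LASTNAME, FIRSTNAME, CITY
FROM MYSCHEMA.CUSTOMER
ORDER BY LASTNAME, FIRSTNAME
OFFSET 20 ROWS
FETCH FIRST 10 ROWS ONLY;

-- Data-dependent pagination: restart behind the last row fetched.
-- The row-value predicate replaces the classic OR construction
--   WHERE LASTNAME > :H1 OR (LASTNAME = :H1 AND FIRSTNAME > :H2)
SELECT LASTNAME, FIRSTNAME, CITY
FROM MYSCHEMA.CUSTOMER
WHERE (LASTNAME, FIRSTNAME) > (:H1, :H2)
ORDER BY LASTNAME, FIRSTNAME
FETCH FIRST 10 ROWS ONLY;
```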
EPICs tested (SQL)
SQL Pagination
Restriction
• The OFFSET clause is not allowed in subqueries
• FETCH FIRST n ROWS is allowed
Conclusion
• Many static scrollable cursors can be replaced by SQL pagination
• The result set is no longer materialized
• Read-only applications do not create long-running units of recovery
• Performance can be improved
EPICs tested (SQL)
Full Merge
V11
• Limited functionality for MERGE
V12
• Full merge with a variety of new features
  • The source can be a table, view or fullselect
  • The MATCHED and NOT MATCHED clauses can be more complex
  • Multiple DELETE/INSERT/UPDATE actions
Example / Explain

MERGE INTO TARGET A
USING SOURCE B
ON A.KEY = B.KEY
WHEN MATCHED AND A.CNT < 1000
  THEN UPDATE
    SET A.CNT = A.CNT + B.CNT
WHEN MATCHED AND A.CNT >= 1000
  THEN DELETE
WHEN NOT MATCHED
  THEN INSERT (KEY, CNT)
    VALUES (B.KEY, B.CNT)
ELSE IGNORE;

Pl J  Pt                               Stmt
Nr Nr M  Table    AC MC Index      T   Nr  Type
--+--+--+--------+--+--+----------+--+---+------
1  1  0  SOURCE   R   0               5   SELECT
1  2  1  TARGET   I   1 IXTARGET   L  5   SELECT
2  0  0  TARGET       0               5   UPDATE
3  0  0  TARGET       0               5   SELECT
4  0  0  TARGET       0               5   INSERT
5  1  0  DSNWFQB  R   0               0   MERGE
EPICs tested (Optimizer)
Adaptive Indexes
Prerequisite
• Cursor with the following query pattern:
  SELECT something
  FROM T1
  WHERE col1 BETWEEN :H1 AND :H2   -- IX1 on col1
    AND col2 BETWEEN :H3 AND :H4   -- IX2 on col2
• Access path: must use list prefetch; at best it would be multiple index access
• One of these BETWEEN predicates runs from LOW- to HIGH-value
V12: adaptive index at runtime
• Enormous savings in CPU consumption (from 400 ms to 20 ms) and elapsed time can occur
• This was the result when, for example, H1 is equal to H2
But
• To support more specific cursors, where at least one of these BETWEEN predicates is an equal predicate, we generally create multi-column indexes
• So there is an index IX1 with (col1, col2) and an index IX2 with (col2, col1)
• In this case multiple index access was not considered
• The use of multiple index access could be more aggressive
EPICs tested (Optimizer)
Bubble up
• Suppose the following statement:
  SELECT something FROM T1
  WHERE T1.col1 = ?
    AND T1.col3 IN (SELECT col3 FROM T2
                    WHERE T2.col1 = ?
                      AND T1.col2 = ?)
• Index IX1 on T1:
  column  COLCARDF
  ------  ----------
  col1            24
  col2    72,379,553
V11
• Misplaced predicate: bad access path, because col2 is not matching
V12
• col2 is now copied or moved to the outer query block
  • If copied: the subquery remains correlated
  • If moved: the subquery becomes non-correlated
  • It is not moved if the subquery contains a column function, e.g. MAX
At the start of the ESP
• Bubble up only worked for IN-subqueries
• It also works for complex queries (taken from production):

  SELECT DISTINCT A.VNR, A.AKZ, TRIM(LEADING FROM A.FIN) AS FIN,
         CASE WHEN C.V_END_DAT > CURRENT DATE THEN 'beendet'
              WHEN C.V_END_DAT < CURRENT DATE THEN 'historisch'
              ELSE 'aktuell'
         END AS STATUS
  FROM DB2.KRTB0301 AS A, DB2.VATB0990 AS C
  WHERE A.GUELTIG_BIS <= CURRENT DATE
    AND A.GUELTIG_AB >= CURRENT DATE
    AND A.HIST_LNR =
        (SELECT MAX(B.HIST_LNR)
         FROM DB2.KRTB0301 AS B
         WHERE A.VNR = B.VNR
           AND A.VNR = C.VNR
           AND C.V_VERW_SYS_SL = 'KR'
           AND C.GUELTIG_BIS >= CURRENT DATE - 5 YEARS)

  • This is a join without a join condition (CROSS JOIN)
  • Three predicates are bubbled up
  • Column ADDED_PRED in DSN_PREDICAT_TABLE contains 'B'
EPICs tested (Optimizer)
Extend NPGTHRSH to default statistics
V11
• No statistics are present or the table is empty
  • That is the general case for tables when they are created
• Queries tend to use tablespace scans or non-matching index scans even if an appropriate index exists
• The access path is somewhat unpredictable
V12
• An appropriate index is chosen
  • More specifically: if a sort can be omitted, that index is chosen; otherwise the index with the most matching columns is chosen
• The result is a more robust access path
• But
  • Planned: matching index access over multiple index access over tablespace scan
EPICs tested (Optimizer)
Extend NPGTHRSH to default statistics
Prerequisite
• SELECT something FROM T1
  WHERE col1 = :H1
     OR col2 = :H2
  currently leads to a tablespace scan although there are appropriate indexes
V12
• For the equivalent
  SELECT something FROM T1
  WHERE col1 = :H1
  UNION
  SELECT something FROM T1
  WHERE col2 = :H2
  matching index access is chosen
But (the reality)
• It does not work as intended
• The index chosen can still be unpredictable
• Example:
  SELECT … FROM T1
  WHERE C1 = ?
    AND C2 = ?
  ORDER BY C3
  • There are two indexes, IX1 (C1, C2, C3) and IX2 (C1, C2, C4)
  • DB2 can still pick index IX2
  • Reason: IX1 and IX2 are in different "classes", and classes compete against each other on a cost basis, independently of the number of matching columns
• RFE 109688 was raised to get the intended behaviour
EPICs tested (Optimizer)
Maintenance of Profile Tables
Prerequisite
• ZPARMs: STATFDBK_PROFILE = YES, STATFDBK_SCOPE not NONE
V12
• The first recommendation creates a profile from existing stats
• Further recommendations update the profile
Migration (suggestion)
• Delete stale statistics, e.g. 6 months older than STATSTIME in SYSTABLES
• If there are special stats, create a profile for those tables
• INLINE STATS can then be used in your REORG jobs immediately after FL 500
Findings
• A rebind of 10,000 packages on 4,000 tables led to only 800 rows in SYSTABLES_PROFILES
  • Reasons BASIC TYPE I and T, STALE and CONFLICT do not update the profile
• After migration we were flooded with 120,000 stale reasons in less than a week
• A REORG of a partition with production data took more than 16 minutes (220 different recommendations) compared to 1 minute with default statistics
EPICs tested (Optimizer)
Maintenance of Profile Tables
Problems
• There is no way for these profiles to contain statements to collect the least values
• Generated RUNSTATS do not contain COUNT at all
• There is no way to avoid storing recommendations for queries which do not make sense
  • A threshold would be beneficial
• Collect only statistics which can affect the access path choice
• Sometimes recommendations do not make sense (bug?)
• How do you decide that collection of certain values is no longer necessary?
• Do we need information on which stats the optimizer hasn't used?
Findings
• If a column is dropped, the materializing REORG will automatically update the profile
• But it does not remove the profile if the last column for which specific RUNSTATS statements are provided is dropped
  • Although nothing more than the default remains
EPICs tested (Utilities)
REORG using RUNSTATS profiles
V11
• RUNSTATS profiles in SYSIBM.SYSTABLES_PROFILES have to be created manually using information from the SYSIBM.SYSSTATFEEDBACK table
• Execution of the RUNSTATS utility is required
V12
• RUNSTATS profiles are created automatically from SYSIBM.SYSSTATFEEDBACK
• Inline stats using a profile can be used in the REORG utility
• Statistics which do not make sense are eliminated from the DB2 catalog
• Easy to implement
  • ZPARMs: STATFDBK_PROFILE=YES, STATFDBK_SCOPE not NONE
  • Add the USE PROFILE parameter to REORG jobs
  • Suitable (?) statistics will be created
Findings
• A REORG of a partition with production data took more than 16 minutes (220 different recommendations) compared to 1 minute with default statistics
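A sketch of profile-driven statistics collection, with hypothetical object names; exact utility control statements may need adjusting to your REORG jobs:

```sql
-- Collect statistics according to the stored RUNSTATS profile:
RUNSTATS TABLESPACE MYDB.TS1 TABLE(MYSCHEMA.T1) USE PROFILE

-- Or gather the same profile-driven statistics inline during REORG:
REORG TABLESPACE MYDB.TS1
  SHRLEVEL CHANGE
  STATISTICS TABLE(ALL) USE PROFILE
```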
EPICs tested (Utilities)
RUNSTATS
V11
• RUNSTATS always flushes the dynamic statement cache
• UPDATE NONE REPORT NO does nothing else
V12
• New keyword INVALIDATECACHE
  • It is not a synonym for UPDATE NONE REPORT NO
• RUNSTATS TABLESPACE database.tablespace INVALIDATECACHE YES
  • Stats are collected for the tablespace
  • The cache is flushed
• RUNSTATS without INVALIDATECACHE YES does not flush the cache
  • That is different from V11
• RUNSTATS TABLESPACE … UPDATE NONE REPORT YES
  • Still flushes the cache, although INVALIDATECACHE YES is not specified
  • INVALIDATECACHE YES is the default here
• RUNSTATS … RESET ACCESSPATH does not remove the RUNSTATS profile
EPICs tested (Temporal Support)
Temporal Logical Transactions
V11
• Multiple working processes are cumulated into a single history table entry
• The number of business working processes (in the history table) depends on the commit frequency
• At ITERGO the commit frequency for batch jobs should only be adjusted in order to deal with locking conflicts during online time and with performance while batch is running

Example: a batch job applies the working process table to the operational data with a commit frequency of 5; 5 business working processes are cumulated into 3 history table entries.

Working process table (input):

Customer ID  Value
1            +500
2            +100
3            +300
3            +100
2            +400
2            +200
1            +700
…

Operational data (after the updates):

Customer ID  Balance  Value
2            700      +200
3            400      +100
1            1,200    +700
…

Historical data:

Customer ID  Balance  Value
1            0        0
2            0        0
3            0        0
2            500      400
1            500      500
…
EPICs tested (Temporal Support)
Temporal Logical Transactions
V12
• SET TLT  -> SET TEMPORAL_LOGICAL_TRANSACTIONS = 1;
• SET TLTT -> SET TEMPORAL_LOGICAL_TRANSACTION_TIME = CURRENT TIMESTAMP;
• Independent of the commit frequency, every single business process is documented and comprehensible

Example: the batch job issues SET TLT once and SET TLTT before each business working process (commit frequency still 5); all business working processes are documented.

Working process table (input):

Customer ID  Value
1            +500
2            +100
3            +300
3            +100
2            +400
2            +200
1            +700
…

Operational data (after the updates):

Customer ID  Balance  Value
2            700      +200
3            400      +100
1            1,200    +700
…

Historical data (one entry per business working process):

Customer ID  Balance  Value
1            0        0
2            0        0
3            0        0
3            300      +300
2            100      +100
2            500      +400
1            500      +500
…
EPICs tested (Temporal Support)
Temporal Logical Transactions
Problem
• History rows can get out of sync (SQLCODE -20528)
Logical unit of work < physical unit of work
• TEMPORAL_LOGICAL_TRANSACTIONS (TLT) must be set to 1
• This can mitigate the out-of-sync problem
Logical unit of work > physical unit of work
• TEMPORAL_LOGICAL_TRANSACTION_TIME can be set without setting TLT
• A subroutine can set TEMPORAL_LOGICAL_TRANSACTION_TIME
  • Subsequent subroutines will always work with the same SYSTEM_END_TIME
  • It does not change after a commit
Use case
• Updates of several temporal tables with the same SYSTEM_END_TIME are possible
Return to default behaviour
• SET TEMPORAL_LOGICAL_TRANSACTIONS (TLT) to NULL
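The V12 behaviour can be sketched as follows (hypothetical table; the SET statements are the ones abbreviated above as SET TLT / SET TLTT):

```sql
-- Enable temporal logical transactions:
SET TEMPORAL_LOGICAL_TRANSACTIONS = 1;

-- First business working process:
SET TEMPORAL_LOGICAL_TRANSACTION_TIME = CURRENT TIMESTAMP;
UPDATE MYSCHEMA.ACCOUNT SET BALANCE = BALANCE + 500 WHERE ID = 1;

-- Second business working process, same physical unit of work:
SET TEMPORAL_LOGICAL_TRANSACTION_TIME = CURRENT TIMESTAMP;
UPDATE MYSCHEMA.ACCOUNT SET BALANCE = BALANCE + 100 WHERE ID = 2;

COMMIT;  -- both processes get their own history table entries

-- Return to the default behaviour:
SET TEMPORAL_LOGICAL_TRANSACTIONS = NULL;
```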
EPICs tested (Miscellaneous)
Dynamic Plan Stability
Prerequisite
• ZPARM CACHEDYN_STABILIZATION=BOTH
Stabilize a specific statement
• -STA DYNQUERYCAPTURE STBLGRP(SPECIFIC) STMTID(94958)
• Works as expected
• Prerequisite: IFCID 318 has to be active
Stabilize queries with more than a certain number of executions
• -STA DYNQUERYCAPTURE STBLGRP(SPECIFIC) THRESHOLD n
• MONITOR(NO)
  • Only statements which have already exceeded this threshold are stabilized
• MONITOR(YES)
  • Every statement which exceeds the threshold is stabilized, as long as this command is active
  • But: if a statement is flushed from the DSC, its counter is reset
• Statements must have the same AUTHID
  • Therefore: not well suited for QMF or SAS queries
EPICs tested (Miscellaneous)
Dynamic Plan Stability
Findings
• To get the access path:
  • EXPLAIN STABILIZED DYNAMIC QUERY STMTID n
    • n = STMTID in SYSDYNQRY = PER_STMT_ID in PLAN_TABLE
  • QUERYNO is not populated for these explains
  • Several executions produce exactly the same rows
    • This can be confusing when analyzing the access path information
  • But with EXPLAIN STMTCACHE STMTID n, QUERYNO is populated
• A stabilized query consumed more CPU than the static equivalent
  • My measurement: about twice as much as the same static statement
EPICs tested (Miscellaneous)
Static Plan Stability
FREE
• Free a copy of a package
• Works fine, even if the package is in use
• The message is confusing: there is no hint that PLANMGMTSCOPE was used
REBIND
• Switch to a previous copy which is invalid
  • Does not work, PMR 79316
• APREUSESOURCE
  • Only possible with APREUSE(WARN/ERROR)
  • The version that was used for comparison is stored in APREUSE_VERSION in DSN_STATEMNT_TABLE
  • The column is also populated for BIND with APREUSE
Other small enhancements
• The new column ORIGIN in SYSPACKAGE tells you what kind of bind it was
  • E.g. 'A' for automatic rebind
EPICs tested (Miscellaneous)
Performance Topics
Fast Insert
• Partitioned tablespace with MEMBER CLUSTER and inline LOBs
  • All LOBs were inline
  • 4.5 million rows inserted
• Fast insert did not work
  • DSNI055I with reason 8 occurred, PMR 79838 (recently closed)
• Suggestion: maintain a counter in RTS for how often fast insert was used
Fast Traversal
• 1,000,000 selects using an unclustered index
  • After several tests, index memory usage occurred
  • Up to 23% CPU-time reduction compared to no FTB usage
• 100,000 inserts
  • Index memory usage occurred
  • Up to 20% CPU-time reduction compared to V11
• Suggestion: maintain a counter in RTS for how often an FTB was built or used
V12 Experiences
• 23 PMRs opened, 22 solved, 1 still open
Some problems during testing
• We got SQLCODE -109 for some packages
  • Reason: non-documented use of SELECT … INTO … UNION ALL …
  • Currently a rebind works fine
  • A new (deprecated) ZPARM DISALLOW_SEL_INTO_UNION will be introduced
• The optimizer generated recommendations for columns which are not supported by RUNSTATS (column too large)
  • RUNSTATS was enhanced to ignore these recommendations
  • Recommendations are still issued by the optimizer
• DSNI055I message for fast insert, first with reason 10, then reason 8
• FTB blocks were built for objects which were excluded via SYSINDEXCONTROL
Other experiences
• The lock escalation message was enhanced: it now also contains the partition number
V12 Experiences
Big problems after migration of our development system
• No problems during migration
• No problems during the first tests
• Problems came up when application developers started working on Monday
  • Some 0C4 and 0D6 abends in the master address space
  • A lot of inconsistent data with different reason codes
  • Broken pages
  • IMS applications hung
  • DB2 crashed several times
• The system could be stabilized on Friday afternoon
  • 2 PTFs were installed
  • All packages which activated a trigger were rebound
  • Every trigger package was rebound
  • Each DBD containing a clone table was repaired
  • FTB was disabled
• Connectivity problems between MS Access and DB2
  • This prevented migrating the next system
Post GA Plans and Migration Schedule
All systems migrate to V12R1M100:
• November 2016: Development system
• March 2017: Approval
• April 2017: Integration
• May 2017: Production
• June 2017: Education and Data Warehouse
Conclusion
Pros
+ Migration effort significantly reduced compared with previous versions
+ Several ITERGO requirements were implemented
+ Very good support from our IBM team
What we could achieve (questions passed to DB2 Development)
• Bubble up was extended to more predicates
• NPGTHRSH will be extended to more cases (RFE 109688)
• Adaptive indexes will also support multi-column indexes
Cons
- Lift partition limit cannot be used
  - STACK YES is mandatory for us for inline copies
- Logical partitions could be better supported
What is still missing
Implicit casting
• Performance is awful for the following query:
  SELECT …
  FROM T1
  WHERE char-col = integer-value
• The predicate is stage 2
• Users just forgot to put quotes around the value
• RFEs 56980, 88964 and 88966
Cross-load SHRLEVEL CHANGE
• RFE 33002
Privileges
• CREATE TABLE with a LIKE clause or with a fullselect
  • Currently CURRENT SQLID must have the SELECT privilege on the table which is used as a template
  • Why is a privilege needed at all?
• RFE 24234