This UPDATE statement should be rewritten as a MERGE statement. You
would not want to index GTT_TAB; there is not even a filter criterion on
that table.
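A rough sketch of what that MERGE might look like. This is only an illustration: the original UPDATE text was never posted, so the join key (tid), the source column (some_col), and the target column (target_col) are hypothetical placeholders; only PART_DT, TID, and CODE appear in the posted predicates.

```sql
-- Sketch only: column names other than PART_DT/TID/CODE (seen in the
-- posted predicate section) are hypothetical placeholders.
MERGE INTO gtt_tab g
USING (SELECT p.tid, p.some_col            -- some_col: assumed source column
       FROM   product_tab p
       WHERE  p.part_dt = :b1
       AND    p.code    = 'XX') src
ON (g.tid = src.tid)                       -- assumed join key
WHEN MATCHED THEN
  UPDATE SET g.target_col = src.some_col;  -- target_col: assumed
```

The point of the rewrite is that MERGE drives one join between the two tables instead of re-probing PRODUCT_TAB once per GTT row, which is where the posted plan spends most of its time.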
On 03.05.2021 at 09:13, Pap wrote:
In all three SQL monitors posted by Lok, it's clear that the major chunk (~80%) of the time and resources is spent accessing table PRODUCT_TAB (which appears to be range partitioned on column PART_DT). So I doubt an index on GTT_TAB will be of any help here. Also, even though the estimate shows 8G rows, the actual number of rows coming out is 17M; I assume that is an error in dynamic sampling. Additionally, if you look at the "gc cr grant 2-way" event, the # samples show only a few seconds out of 1hr+ of DB time.
On Mon, May 3, 2021 at 5:22 AM Mladen Gogala <gogala.mladen@xxxxxxxxx> wrote:
How about indexing GTT_TAB?
SQL Plan Monitoring Details (Plan Hash Value=1406665007)
=============================================================================================================================================================================
| Id |             Operation              |     Name     |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Read | Read  | Activity |        Activity Detail         |
|    |                                    |              | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes |   (%)    |          (# samples)           |
=============================================================================================================================================================================
|  0 | UPDATE STATEMENT                   |              |         |      |      1235 |  +2153 |     1 |        0 |      |       |     0.07 | Cpu (1)                        |
|    |                                    |              |         |      |           |        |       |          |      |       |          | reliable message (2)           |
|  1 |   UPDATE                           | GTT_TAB      |         |      |      4044 |     +1 |     1 |        0 |      |       |    10.62 | Cpu (429)                      |
|  2 |    TABLE ACCESS STORAGE FULL       | GTT_TAB      |      8G | 381K |      4042 |     +3 |     1 |      17M | 4467 |   4GB |     0.72 | Cpu (20)                       |
|    |                                    |              |         |      |           |        |       |          |      |       |          | cell multiblock physical read (9) |
|  3 |   FILTER                           |              |         |      |      4042 |     +3 |   17M |       8M |      |       |     0.07 | Cpu (3)                        |
|  4 |    PARTITION RANGE SINGLE          |              |       1 |   13 |      4042 |     +3 |    8M |       8M |      |       |     0.12 | Cpu (5)                        |
|  5 |     TABLE ACCESS BY LOCAL INDEX ROWID | PRODUCT_TAB |     1 |   13 |      4043 |     +2 |    8M |       8M |   6M |  45GB |    55.79 | gc cr grant 2-way (1)          |
|    |                                    |              |         |      |           |        |       |          |      |       |          | Cpu (292)                      |
|    |                                    |              |         |      |           |        |       |          |      |       |          | cell single block physical read (1961) |
|  6 |      INDEX RANGE SCAN              | PROD_TAB_IX1 |       1 |    4 |      4042 |     +3 |    8M |       8M |   3M |  25GB |    32.60 | Cpu (218)                      |
|    |                                    |              |         |      |           |        |       |          |      |       |          | cell single block physical read (1099) |
=============================================================================================================================================================================
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter(:B1='Y')
5 - filter("P"."PART_DT"=:B1)
6 - access("P"."TID"=:B1 AND "P"."CODE"='XX')
**************************************************
Your plan says that GTT_TAB has around 8G rows and is accessed
using a full table scan. Since your update is using a correlated
query, I would try indexing the GTT. The other thing I notice is
the "gc cr grant 2-way (1)" event. That means that PRODUCT_TAB is being
updated from another node. You are essentially coordinating with
the other node, and that is slow. Can you perform the operation on
the same node where the table is being modified?
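For completeness, a sketch of the GTT index Mladen is suggesting. The column choice is an assumption (the correlated column from the UPDATE isn't shown anywhere in the thread); it would need to match whatever column the correlated subquery probes:

```sql
-- Sketch: index the column(s) referenced by the correlated subquery.
-- "tid" is an assumed column name. Indexes on a GTT are part of the
-- table definition; the indexed data itself stays session-private.
CREATE INDEX gtt_tab_ix1 ON gtt_tab (tid);
```

Whether this helps depends on which side drives the join; as Pap notes above, the posted plans spend their time on the PRODUCT_TAB side, not scanning the GTT.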
On 5/2/21 2:53 PM, Lok P wrote:
Hello, we have a PL/SQL procedure that updates multiple
columns of a global temporary table after loading it in
different steps. It's updated with different filter criteria, and
each of these UPDATEs runs for hours even though they are
simple ones. I am trying to see if it's possible to do this more
efficiently by combining all the different updates into one
UPDATE statement, or by some other method. One initial
thought was to do it as part of the INSERT itself rather than
updating through multiple statements at a later stage, but
I see the data load/INSERT happens under multiple if/else
conditions, which may not be straightforward.
Attached is the SQL monitor for a few of the long-running UPDATEs.
I am seeing that the time is not really spent updating
rows but rather in finding the eligible rows to update, i.e. the
SELECT part of it is what is actually taking the time. So I wanted to
understand how these UPDATEs can be made faster.
It's version 11.2.0.4 of Oracle on Exadata X5.
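The "combine all different updates into one UPDATE" idea Lok raises is typically done with CASE expressions in the SET list, so the table is scanned once instead of once per statement. A sketch with entirely hypothetical column names and filter conditions, since the actual statements were not posted:

```sql
-- Sketch: fold several single-purpose UPDATEs into one pass over the GTT.
-- col_a, col_b, flag, and status are hypothetical placeholders.
UPDATE gtt_tab g
SET    g.col_a = CASE WHEN g.flag   = 'Y' THEN 'A1' ELSE g.col_a END,
       g.col_b = CASE WHEN g.status = 1   THEN 'B1' ELSE g.col_b END
WHERE  g.flag = 'Y' OR g.status = 1;  -- union of the individual filters
```

The ELSE branches keep untouched rows at their current values, and the WHERE clause restricts the statement to rows matched by at least one of the original filters so unaffected rows are not rewritten.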
Regards
Lok
--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
https://dbwhisperer.wordpress.com
-- //www.freelists.org/webpage/oracle-l