That says to me: do the bulk load population of GTT_0, and then convert each
update into an insert into GTT_n, where n increases from 1 to m for each of the
updates you have. If space is a problem, you can drop GTT_n-1 after each insert
completes.
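A hedged sketch of that cascade, with hypothetical table and column names (gtt_0 through gtt_2, product_tab, other_source), each former UPDATE reborn as the join in a direct-path insert:

```sql
-- Sketch only: each former UPDATE becomes a direct-path insert into the
-- next GTT in the chain; names and column lists are illustrative.
INSERT /*+ APPEND */ INTO gtt_1
SELECT g.tid, g.part_dt, p.col1 AS c1      -- former update 1, computed in the join
FROM   gtt_0 g
LEFT JOIN product_tab p
       ON  p.tid     = g.tid
       AND p.part_dt = g.part_dt
       AND p.code    = 'XX';
COMMIT;   -- a commit is needed before the next step can read gtt_1

INSERT /*+ APPEND */ INTO gtt_2            -- former update 2, same pattern
SELECT g.tid, g.part_dt, g.c1, q.col2 AS c2
FROM   gtt_1 g
LEFT JOIN other_source q
       ON q.tid = g.tid;
COMMIT;
-- If space is a problem: TRUNCATE TABLE gtt_1;
```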
I believe that is essentially a synthesis of what JL suggested with a
trivial, one-statement-at-a-time version of the "scaling to infinity" method
documented and made popular by Tim Gorman. The results *should be* full set
operations at direct-path (aka append) speed. By cascading through GTTs you
dispense with a ton of recoverability overhead, and then you can preserve the
final result by also inserting into the final destination.
Depending on size, you might want to produce a GTT_Product_tab that includes
just the rows that GTT_0 says you might require, so you don't have to paw
through Product_tab n times.
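Such a trimmed copy might look like this (a sketch; it assumes GTT_0 carries the tid/part_dt keys used in the joins):

```sql
-- Build a working copy of PRODUCT_TAB holding only the rows that GTT_0
-- can possibly need, so every later step scans the small copy instead.
INSERT /*+ APPEND */ INTO gtt_product_tab
SELECT p.*
FROM   product_tab p
WHERE  EXISTS (SELECT NULL
               FROM   gtt_0 g
               WHERE  g.tid     = p.tid
               AND    g.part_dt = p.part_dt);
COMMIT;
```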
I guess that depends mostly on whether it is easier to debug your existing
updates into correct copy/inserts or to build the full insert-row select in
one go. Based on JL spotting the apparent bug in the updates, my guess would
be that a series of cascading inserts would take less time to debug for the
team building this, and it shouldn't be significantly slower. Besides, it's
not clear that P.code='AA' can even be done in one step.
Good luck.
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On
Behalf Of Lok P
Sent: Friday, May 14, 2021 6:11 AM
To: Mark W. Farnham
Cc: Jonathan Lewis; Oracle L
Subject: Re: Tuning Update query
Thank you very much for the guidance here.
While going through the code, I noticed that none of the UPDATED columns was
getting populated as part of the INSERT query which populates data into the
GTT inside the code. They are getting populated for the first time through the
UPDATE statements, i.e. post load. Which means, I think, we can outer join those
tables (like PRODUCT_TAB) directly in the INSERT query itself and populate the
columns there, rather than doing it after the data load.
But then I see there exists a one-to-many relationship between the source-table
JOIN (which populates the base data into the GTT) and PRODUCT_TAB, so the
outer join is going to break things because the overall number of result rows
will increase, which may not be correct. Or else we may have to use a
DISTINCT clause to correct for that.
Regards
Lok
On Tue, May 4, 2021 at 5:03 PM Mark W. Farnham <mwf@xxxxxxxx> wrote:
Do you have handy the script or load used to create GTT_TAB?
It seems passing strange that you want to debug what is essentially a row by
row case computation instead of creating the disjoint sets of rows to operate
on as a bulk set.
Consider GTT_TAB_XX_Y, alias XX_Y, which contains only the rows with
TMP.CIND='Y' whose TMP.PART_DT matches a P.PART_DT having P.CODE='XX'.
Then insert into GTT_TAB:
select p.<the column list from P>, xx_y.<the column list from XX_Y>
from P p, GTT_TAB_XX_Y xx_y
where p.tid = xx_y.tid and p.part_dt = xx_y.part_dt
All the rows from xx_y are used and should come from a full table scan,
matching against the presumably indexed p.tid, p.part_dt from P.
Do this for each of your separate "updates", appending into GTT_TAB, and
append the disjoint bits that require no update directly into GTT_TAB
however you do now. The only tricky set is P.CODE='AA', for which you have two
updates. For that, GTT_TAB_AA_5 gets appended into GTT_TAB_AA_6 and
GTT_TAB_AA_6 gets appended into GTT_TAB.
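Spelled out for the tricky AA set, a sketch only: the _AA_5/_AA_6 names come from the description above, while the column lists and the second source table (other_source_tab) are hypothetical stand-ins:

```sql
-- First AA "update" becomes a direct-path insert into the intermediate GTT.
INSERT /*+ APPEND */ INTO gtt_tab_aa_6
SELECT aa5.tid, aa5.part_dt, p.col1 AS c1, aa5.c2
FROM   gtt_tab_aa_5 aa5, product_tab p
WHERE  p.tid     = aa5.tid
AND    p.part_dt = aa5.part_dt
AND    p.code    = 'AA';
COMMIT;

-- Second AA "update" becomes the append into the final GTT.
INSERT /*+ APPEND */ INTO gtt_tab
SELECT aa6.tid, aa6.part_dt, aa6.c1, q.col2 AS c2
FROM   gtt_tab_aa_6 aa6, other_source_tab q
WHERE  q.tid = aa6.tid;
COMMIT;
```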
All the other rows (which meet zero of your update predicates) you dump
directly into GTT_TAB from wherever you are currently plucking them.
This of course fixes your 'Y' problem and operates set-wise using bulk inserts
AND operates only on rows that need attention, instead of filtering out the
rows you're not interested in for each update.
Write out the differences as a dataflow diagram to understand why this always
wins.
mwf
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On
Behalf Of Jonathan Lewis
Sent: Tuesday, May 04, 2021 5:07 AM
To: Lok P
Cc: Oracle L
Subject: Re: Tuning Update query
a) I wasn't suggesting that you use an updatable join view, but I can see how
you could misinterpret what I said. The idea was to see if you could write a
single subquery that joined several of the source tables (such as PRODUCT_TAB)
and then still carry out that full tablescan of the GTT, but using that join
as the subquery in the update rather than having "one tablescan and update =
one subquery".
I see you're already doing something of this kind using the MERGE command; if
you're getting an ORA-01779 when you try to rewrite one of your updates as an
update through an updatable join view, this means that equivalent code to do a
MERGE could produce a run-time error (ORA-30926). The update portion of a merge
and an updatable join view both require the same uniqueness, but the view
enforces the logical requirement at compile time while the merge command allows
you to get away with the update so long as you get lucky with the data.
I note that you still haven't moved the "tmp.cind = 'Y'" predicate to the
correct position - that should have been the zeroth step before worrying about
anything else. I assume the one update I picked out isn't the only one where
this error occurs.
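For anyone following along, a sketch of what moving that predicate looks like (column lists trimmed; this is not necessarily the exact rewrite intended). The EXISTS is the usual companion clause so that 'Y' rows with no PRODUCT_TAB match aren't set to NULL either:

```sql
UPDATE gtt_tab tmp
SET (c1, c2, c3) =                      -- column lists trimmed for the sketch
    (SELECT p.col1, p.col2, p.col3
     FROM   product_tab p
     WHERE  p.tid     = tmp.tid
     AND    p.part_dt = tmp.part_dt
     AND    p.code    = 'XX')
WHERE tmp.cind = 'Y'                    -- the filter, moved OUT of the subquery
AND   EXISTS (SELECT NULL
              FROM   product_tab p
              WHERE  p.tid     = tmp.tid
              AND    p.part_dt = tmp.part_dt
              AND    p.code    = 'XX');
```

With the predicate inside the subquery, every row failing cind = 'Y' still gets updated, with all eleven columns set to NULL.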
b) Correct regarding multiple GTTs. Each "insert /*+ append */" would probably
require its own target GTT. Remember, though, that I haven't examined your
original posting in detail, and you probably haven't told us everything a
consultant would ask about so if your current code has some parts that update
disjoint subsets of the data you might find parts of the rewrite where you
could do multiple inserts into the same GTT.
One thing to bear in mind - a possible bar to adopting this approach - is that
you would have to commit after each insert, otherwise the next insert, or the
next query, against the target GTT would raise the (unexpectedly parallel)
error:
ORA-12838: cannot read/modify an object after modifying it in parallel
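In other words, the cascade has to follow this pattern (a sketch with hypothetical GTT names):

```sql
-- After a direct-path insert the table cannot be read in the same
-- transaction, so each step must commit before the next one starts.
INSERT /*+ APPEND */ INTO gtt_b SELECT * FROM gtt_a;
COMMIT;   -- omit this and the next statement raises ORA-12838
INSERT /*+ APPEND */ INTO gtt_c SELECT * FROM gtt_b;
COMMIT;
```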
Regards
Jonathan Lewis
On Mon, 3 May 2021 at 12:58, Lok P <loknath.73@xxxxxxxxx> wrote:
Thank you, Jonathan and Lothar. I was trying to modify one of the sample
updates (as below) as per your suggested method.
UPDATE GTT_TAB TMP
SET (c1, c2, c3, c4,c5.. c11) =
( (SELECT col1,col2,..col11
FROM PRODUCT_TAB P
WHERE P.TID = TMP.TID
AND TMP.CIND = 'Y'
AND P.PART_DT = TMP.PART_DT
AND P.CODE = 'XX'))
Jonathan, let me know if my understanding of the suggested points is correct:
"a) See if you can minimise the number of update steps by updating from a
JOIN of several source tables"
If I've understood the above point correctly, I tried to modify the UPDATE as
below, but I am getting ORA-01779 when running it to see the plan. So does it
mean that the GTT has to have a unique key present in it for this method to
work?
UPDATE (SELECT P.COL1...P.col11,TMP.C1.. TMP.C11
FROM PRODUCT_TAB P,GTT_TAB TMP
WHERE P.TID = TMP.TID
AND TMP.CIND = 'Y'
AND P.PART_DT = TMP.PART_DT
AND P.CODE = 'XX'
) SET C1=COL1, ... C11=COL11;
ERROR at line 10:
ORA-01779: cannot modify a column which maps to a non key-preserved table
Regarding the below point,
"b) Rewrite the code to step through a series of GTT doing
insert /*+ append */ into next_gtt select from previous_gtt join
{source tables}"
Do you mean I have to replace the UPDATEs with INSERT /*+APPEND*/... queries?
But as that can't be done within the same GTT, do I have to create multiple
GTTs, one for each UPDATE, so as to replace them with INSERT APPEND queries?