Lothar
Yes, this was around 2 hours for 1 billion rows, and the tables have 15-23 billion rows.
There is no LOB object in any of the tables where the UPDATE has to be processed. Let me
work on converting the one provided in test. It is around 800 GB with Advanced Compression,
and without compression it looks like it will be 3-4 TB, but I can try to test it.
Regarding the other part: it is Exadata 7, with CPU_COUNT tested at 30-40, but no
difference so far.
Tx
Sanjay
On Thursday, August 27, 2020, 01:16:24 PM EDT, Lothar Flatz
<l.flatz@xxxxxxxxxx> wrote:
Hi,
Runtime is 1:45 hours, not 20??
But 1:45 still seems too long.
- Insert is direct parallel
- Work distribution is about even
- Statement scales on CPU
So that is ok.
My gut feeling is that CPUs are not delivering.
I could be wrong of course, because I am missing many details (e.g. how many
LOBs are in that table).
- How many cores do you have that can work on this task?
- What kind of cores (old SPARCs?)
- What is your compression?
I would run the CTAS without compression. Even if you want the result
compressed, you might gain insight.
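For illustration, an uncompressed parallel CTAS along those lines might look like
the sketch below (table name as used elsewhere in this thread; the degree and the
plain copy of the data are only placeholders):

create table snows.stamp_detail_nocomp
  nocompress nologging
  parallel 32
as
select /*+ parallel(32) */ *
  from snows.stamp_detail;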
Regards
Lothar
On 27.08.2020 at 18:45, Sanjay Mishra wrote:
Clay
Thanks for the update. Regarding the table, all work for the update/CTAS was tried
only after it had been refreshed with EXPDP/IMPDP from production, in order to test
the timeline. I shared the SQL Monitor report in the last email and here it is
attached again.
Sanjay
On Thursday, August 27, 2020, 12:09:19 PM EDT, Clay Jackson
(cjackson) <clay.jackson@xxxxxxxxx> wrote:
What Lothar said – I’d look at the plan for the CTAS to be sure the optimizer
isn’t doing something “unusual”, AND, consider the possibility that the table
is already horribly “row chained” so that each read is actually reading several
“random” (or worse) blocks. More data is clearly key to understanding.
Clay Jackson
From: oracle-l-bounce@xxxxxxxxxxxxx <oracle-l-bounce@xxxxxxxxxxxxx> On Behalf
Of Lothar Flatz
Sent: Thursday, August 27, 2020 3:41 AM
To: oracle-l@xxxxxxxxxxxxx
Subject: Re: Big Update/DML
Hi,
with regard to the CTAS, it is very hard to believe it takes that long. I am
pretty sure that there is something wrong.
A SQL Monitor report would be extremely helpful.
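(For example, something along these lines pulls it once the sql_id is known; the
substitution variable below is just a placeholder:)

select dbms_sqltune.report_sql_monitor(
         sql_id       => '&sql_id',   -- placeholder for the statement's sql_id
         type         => 'TEXT',
         report_level => 'ALL')
  from dual;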
Regards
Lothar
On 27.08.2020 at 02:43, Sanjay Mishra (Redacted sender smishra_97 for
DMARC) wrote:
Sayan
Update statement is
Update snows.stamp_detail set stamp_process_calc = processed_calc_amt;
Tried to use (rough sketches of each form are below):
1. Parallel DML with 100 --> taking 20+ hrs
2. CTAS was tried using half a billion as well as 1 billion rows with parallel
50, 75, 100 - almost the same result
3. CTAS with nologging using the same setup as step 2, but still not much improvement
We have 5-10 such big tables, so running each one within this kind of time frame
needs a lot of downtime.
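In sketch form, the attempts above look roughly like this (the target column is
simply copied from processed_calc_amt as in the statement above, and the table's
remaining columns are omitted because they are not listed here):

-- 1. parallel DML form
alter session enable parallel dml;
update /*+ parallel(sd, 100) */ snows.stamp_detail sd
   set sd.stamp_process_calc = sd.processed_calc_amt;
commit;

-- 2./3. CTAS form (nologging as in attempt 3)
create table snows.stamp_detail_new
  parallel 100 nologging
as
select /*+ parallel(100) */
       sd.processed_calc_amt,
       sd.processed_calc_amt as stamp_process_calc
       -- ... plus the table's remaining columns, not listed here
  from snows.stamp_detail sd;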
Tx
Sanjay
On Wednesday, August 26, 2020, 08:23:18 PM EDT, Sayan Malakshinov
<xt.and.r@xxxxxxxxx> wrote:
Hi Sanjay,
It would be better if you could provide more details about your update. The exact
update statement would be helpful. Is this column nullable or NOT NULL?
On Thu, Aug 27, 2020 at 1:12 AM Sanjay Mishra <dmarc-noreply@xxxxxxxxxxxxx>
wrote:
Andy
Yes, it looks like an option if we can do the work online; even though it takes
more time, it would not require downtime. In our case multiple DDLs are being run
against the existing environment due to an application upgrade, so all the work
has to be done with downtime. So the challenge is to reduce the time of DML
operations on big tables containing a few billion rows.
Tx
Sanjay
On Wednesday, August 26, 2020, 11:20:55 AM EDT, Andy Sayer
<andysayer@xxxxxxxxx> wrote:
It does sound like a virtual column could be the ideal solution. But if the data
needs to be physically stored or cannot be calculated deterministically at any
point in time, then Connor has a great demo of using dbms_redefinition to create
a new table online with a function to map the new column. There's obviously
some overhead with context switching, but it may be far better than some of the
obstacles you might be facing at the moment:
https://connor-mcdonald.com/2016/11/16/performing-a-large-correlated-update/
(and you might be able to help it with pragma udf in the right circumstances).
Obviously, how helpful this is depends on where the work is currently going and
how online this needs to be.
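In sketch form, with a hypothetical function name and an abbreviated column
mapping (the copy_table_dependents/sync_interim_table steps are omitted), that
approach looks roughly like:

create or replace function calc_stamp(p_amt number) return number is
  pragma udf;                    -- reduces the SQL -> PL/SQL context-switch cost
begin
  return p_amt;                  -- placeholder for the real calculation
end;
/

-- interim table with the same shape as the original
create table snows.stamp_detail_int as
  select * from snows.stamp_detail where 1 = 0;

begin
  dbms_redefinition.start_redef_table(
    uname       => 'SNOWS',
    orig_table  => 'STAMP_DETAIL',
    int_table   => 'STAMP_DETAIL_INT',
    col_mapping => 'processed_calc_amt processed_calc_amt, ' ||
                   'calc_stamp(processed_calc_amt) stamp_process_calc'
                   -- every remaining column must be mapped here as well
  );
  dbms_redefinition.finish_redef_table('SNOWS', 'STAMP_DETAIL', 'STAMP_DETAIL_INT');
end;
/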
Thanks,
Andrew
On Wed, 26 Aug 2020 at 16:00, Jonathan Lewis <jlewisoracle@xxxxxxxxx> wrote:
Is that 3-4 billion rows each, or in total?
I would be a little suspicious of an update which populates a new column with a
value derived from existing columns. What options might you have for declaring
a virtual column instead - one which you could index if needed?
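In sketch form, and assuming the calculation really is the simple copy shown in
the update statement quoted above, that might look like (the new names are only
illustrative):

alter table snows.stamp_detail add (
  stamp_process_calc_v number
    generated always as (processed_calc_amt) virtual
);

create index stamp_detail_calc_ix
  on snows.stamp_detail (stamp_process_calc_v)
  parallel 16;   -- degree is illustrative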
Be extremely cautious about calculating space requirements - if you're updating
every row in old data you might find that you're causing a significant fraction
of the rows in each block to migrate, and there's a peculiarity of bulk row
migration that can effectively "waste" 25% of the space in every block that
becomes the target of a migrated row.
This effect can be MUCH worse when the table is compressed (even for OLTP) since
the update has to decompress the row before updating and then only
"re-compresses" intermittently as the block becomes full. The CPU cost can be
horrendous and you still have the problem of migration if the addition means
the original rows can no longer fit in the block.
If it is necessary to add the column you may want to review what "alter table
move online" can do in the latest versions (in case you can make it add the
column as you move) or review the options for dbms_redefinition - maybe running
several redefinitions concurrently rather than trying to do any parallel update
to any single table.
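For reference, the basic form of the former in 12.2+ is something like the sketch
below (the options shown are illustrative, and whether a column addition can be
combined with the move would need to be verified):

alter table snows.stamp_detail
  move online
  row store compress advanced
  parallel 16;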
Regards
Jonathan Lewis
--
Best regards,
Sayan Malakshinov
Oracle performance tuning engineer
Oracle ACE Associate
http://orasql.org