Re: advice for massive delete query

  • From: bobmetelsky <bobmetelsky@xxxxxxxxxxx>
  • Date: Wed, 31 Aug 2005 23:29:01 -0400

Yes, that sounds good, and thanks, Mark, for the detailed message.

What is PCTAS? The "P" plus "create table as ..."? (Parallel CTAS?)

Can't I just rename the scratch table and recreate the indexes? That would save the time of doing a direct-path load.

I don't think there is much in the way of dependent objects.

As Mark mentioned, this is a good candidate for future partitioning.
The instance is in NOARCHIVELOG mode, with 50GB of temp space.

This should create approximately 150M rows. Is there any more effective syntax than the following?

create table N_target
NOLOGGING
as select * from source
where 4_num < 20040101
   OR 4_num > 20041231;

drop old and rename;
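A hedged sketch of that drop-and-rename sequence, using the placeholder names from this thread (the index name and column list are hypothetical; note that grants, triggers, and synonyms on the old table do NOT follow a rename and would need recreating too):

```sql
-- Build the keeper set with minimal redo (NOARCHIVELOG instance, NOLOGGING table).
create table N_target nologging
as select * from source
 where 4_num < 20040101
    or 4_num > 20041231;

-- Swap the tables.
drop table source;
rename N_target to source;

-- Recreate indexes from scratch; nologging speeds the build, then reset.
create index source_i1 on source (1_num, 2_num, 3_num, 4_num) nologging;
alter index source_i1 logging;
alter table source logging;
```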

Thanks for the suggestions
bob


"Oracle error messages being what they are, do not highlight the correct cause of fault, but will identify some other error located close to where the real fault lies."


John Clarke wrote:
I'm with Mark - create a scratch table (nologging of course, PCTAS maybe)
with all the data you want to keep, truncate the original table, and
direct-path insert rows from the scratch table back into the original.  Drop
the scratch table when done.

Depending on all the stuff associated with the table (indexes, triggers,
grants, etc.), the truncate/re-insert approach works well because you don't
have to remember much of anything, except maybe to drop and rebuild the
indexes when done. Of course, this approach will require space that you may
not have, as well as downtime you may not have.
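A sketch of that truncate/re-insert variant, assuming the same placeholder names from this thread (untested; direct-path insert also assumes no active triggers or enabled referential constraints on the target):

```sql
-- Scratch table holds the keepers: rows NOT matched by the delete criteria.
create table scratch nologging
as select * from target t
 where not exists
   (select null from source s
     where s.1_num = t.1_num and s.2_num = t.2_num
       and s.3_num = t.3_num and s.4_num = t.4_num
       and s.4_num between 20040101 and 20041231);

truncate table target;

-- Direct-path insert the keepers back, then drop the scratch copy.
alter table target nologging;
insert /*+ append */ into target select * from scratch;
commit;
alter table target logging;

drop table scratch;
```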

Just an idea ...

-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx]
On Behalf Of Mark W. Farnham
Sent: Wednesday, August 31, 2005 9:47 PM
To: bobmetelsky@xxxxxxxxxxx; oracle-l
Subject: RE: advice for massive delete query


Is it possible to copy the keepers instead and then drop the original table
and do the requisite renames?

Delete has to copy the entire row to rollback. Copying the keepers in a
particular order *may* have an extreme benefit if there is a dominant order
of access to the table. The result set has no air left in it, which is
good for relatively static data, but less good for a high rate of
collisions on block updates.

If (as it appears from your delete values) this is essentially unhooking a
year's worth of data, then think seriously about partitioning next time
around. A year at a time is usually thought to be a pretty big chunk; your
mileage may vary.
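For next time around, a range-partitioned layout along these lines would turn the annual unhook into a metadata operation (a sketch only; the column names are the thread's placeholders, and valid identifiers can't actually start with a digit):

```sql
-- Range partitioning on the date-number column, one partition per year.
create table target_p (
  1_num number, 2_num number, 3_num number, 4_num number
)
partition by range (4_num) (
  partition p2004 values less than (20050101),
  partition p2005 values less than (20060101),
  partition pmax  values less than (maxvalue)
);

-- Unhooking a year is then near-instant, no 150M-row delete:
alter table target_p drop partition p2004;
```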

Or even use "poor man's partitioning." If your granularity is a year, then
the inserts go into a table tabname2005 currently, and next year you'll
switch the insert synonym to point at tabname2006. Then your select and
update view is tabnameu and your insert view is tabnamei (or whatever
naming convention you want). When you unhook a year, you redefine the
synonyms and views at a quiescent moment. (That's the way dinosaurs handled
it before there was partitioning.)
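The dinosaur approach Mark describes might look like this (all object names hypothetical, following his tabname2005/tabnameu/tabnamei convention):

```sql
-- One physical table per year, same shape as the source.
create table tabname2004 as select * from source where 1 = 0;
create table tabname2005 as select * from source where 1 = 0;

-- Select/update view spans the years currently online.
create or replace view tabnameu as
  select * from tabname2004
  union all
  select * from tabname2005;

-- Insert synonym points at the current year; repoint it each January.
create or replace synonym tabnamei for tabname2005;

-- "Unhooking" 2004 at a quiescent moment:
create or replace view tabnameu as select * from tabname2005;
drop table tabname2004;
```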

On the con side of course, you'll transiently need more space, and you'll
need to build any indexes from scratch at full sort overhead, excepting
possibly one index if you copy the keepers in the requisite order.

Usually you'll want to add up the overheads (test a reasonably sized sample,
but remember that index builds cost about n log n, so a sample will give you
an underestimate). If you have lead time, you can create full-size indexes
on a test database copy to size the effort more accurately.

Thumbrule: if you're keeping less than half, it's not even close: copy the
keepers. (Oh, I hated to type in a thumbrule; there are always exceptions,
like if you have "number of columns in the table, factorial" indexes.)

Regards,

mwf

-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx
[mailto:oracle-l-bounce@xxxxxxxxxxxxx]On Behalf Of bobmetelsky
Sent: Wednesday, August 31, 2005 6:13 PM
To: oracle-l
Subject: advice for massive delete query


Guys, I have this statement from a developer:

DELETE FROM target
WHERE (1_num, 2_num, 3_num, 4_num)
IN
(SELECT 1_num, 2_num, 3_num, 4_num
 FROM source s
 WHERE s.4_num BETWEEN 20040101 AND 20041231);

The statement is expected to delete around 150M rows; the subquery
returns about 300M rows(!).
All datatypes are number(n).

I'm not a performance guru, but I suggested using the commit_every(n)
package from Steve Adams' site, plus CTAS,
e.g.

begin
  commit_every(100);
  execute immediate
    'create table t nologging as
     select * from source
     where 4_num < 20040101 OR 4_num > 20041231';
end;
/

What are the options for a massive delete statement such as this?
  I've googled and poked around orafaq but didn't come up with much.

I recall the helpful mindset from Wolfgang Breitling: "the quickest way
to do something is often not to do it", so I'm thinking CTAS with
commit_every() as a viable alternative to the delete.

Ideas from the more experienced?

thanks!

  bob
--
//www.freelists.org/webpage/oracle-l





