RE: Deletion Of 160 Million Rows.

  • From: William B Ferguson <wbfergus@xxxxxxxx>
  • To: oracle-l@xxxxxxxxxxxxx
  • Date: Wed, 9 Feb 2005 07:39:34 -0700

Okay, here's another way I thought of (please critique me if I'm wrong
anywhere). This will eliminate any problems with undo or archives. Test
the entire process first with a sample table, to verify all the steps and
sequences, and so that a script can be made to perform all the actions as
quickly as possible.

(script 1)
1. Create an empty (skeleton) copy of your primary table (needed for the
next part), i.e., create table xyz as (select * from primary_table where
1=2).
2. Create view on the skeleton table based upon what records you want to
keep.
3. Create an INSTEAD OF trigger on the view, so any insert, update, or
delete will copy the real record to a temporary table so the action can be
applied later. Create the temp table as above, with an additional field to
specify the future action (Insert, Update, Delete).
4. Drop the skeleton table. This makes the view and trigger invalid.
5. Rename the table to whatever you called the skeleton table.
6. Rename the view to the name of your original table and recompile.
7. Modify the INSTEAD OF trigger so it is applied to the renamed view, and
recompile.
8. ALTER TABLE to disable logging.
9. At your leisure, run whatever process you want to delete the records: do
a normal delete, or have a PL/SQL program do a series of batch deletes.
(A rough sketch of script 1 follows this list.)
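
Something along these lines, perhaps (untested, and every name here --
BIG_TABLE, BIG_TABLE_WORK, BIG_TABLE_V, BIG_TABLE_CHANGES, the ID and
KEEP_FLAG columns, the batch size -- is just a placeholder to show the
shape of it):

-- Step 1: empty skeleton copy of the primary table
CREATE TABLE big_table_work AS SELECT * FROM big_table WHERE 1 = 2;

-- Step 2: view over the skeleton, limited to the rows you want to keep
CREATE VIEW big_table_v AS
  SELECT * FROM big_table_work WHERE keep_flag = 'Y';

-- Step 3: holding table (skeleton plus an action column) and an INSTEAD OF
-- trigger that records any DML issued against the view for later replay
CREATE TABLE big_table_changes AS
  SELECT t.*, CAST(NULL AS VARCHAR2(1)) AS dml_action
    FROM big_table_work t WHERE 1 = 2;

CREATE OR REPLACE TRIGGER big_table_v_iot
  INSTEAD OF INSERT OR UPDATE OR DELETE ON big_table_v
  FOR EACH ROW
BEGIN
  -- only the key column is shown; in practice list every column
  IF DELETING THEN
    INSERT INTO big_table_changes (id, dml_action) VALUES (:OLD.id, 'D');
  ELSIF UPDATING THEN
    INSERT INTO big_table_changes (id, dml_action) VALUES (:NEW.id, 'U');
  ELSE
    INSERT INTO big_table_changes (id, dml_action) VALUES (:NEW.id, 'I');
  END IF;
END;
/

-- Steps 4-7: swap names so the view answers to the original table name
DROP TABLE big_table_work;
RENAME big_table TO big_table_work;
RENAME big_table_v TO big_table;
ALTER VIEW big_table COMPILE;  -- re-point/recompile the trigger too if needed

-- Step 8: turn off logging on the (renamed) real table
ALTER TABLE big_table_work NOLOGGING;

-- Step 9: delete in batches, committing as you go, to keep undo usage bounded
BEGIN
  LOOP
    DELETE FROM big_table_work
     WHERE keep_flag = 'N'        -- stand-in for your real delete criteria
       AND ROWNUM <= 100000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
END;
/

(Note that a conventional DELETE still generates undo and redo even with
NOLOGGING in place, so it's really the batch-and-commit loop that keeps the
undo usage in check.)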

(script 2)
10. Whenever everything is done, rename the view (in case you need/want it
later), then rename the table back to its original name. This will
invalidate the trigger, so its process won't apply any more. Also do an
ALTER TABLE again to re-enable LOGGING.
11. Apply the changes recorded in your temporary table, and then truncate
it. TRUNCATE generates almost no redo or undo and is much faster than a
delete. (A rough sketch of script 2 follows.)
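
Script 2 might then boil down to something like this (again untested, same
placeholder names, and only the delete and insert replays are shown;
updates would follow the same pattern):

-- Step 10: give the view back its own name, restore the table's real name,
--          and re-enable logging
RENAME big_table TO big_table_v;      -- the view, kept around for later
RENAME big_table_work TO big_table;   -- the trimmed-down real table
ALTER TABLE big_table LOGGING;

-- Step 11: replay the DML captured while the cleanup ran, then empty the
--          holding table (only the key column shown for brevity)
DELETE FROM big_table b
 WHERE EXISTS (SELECT 1
                 FROM big_table_changes c
                WHERE c.dml_action = 'D'
                  AND c.id = b.id);

INSERT INTO big_table (id)
  SELECT id FROM big_table_changes WHERE dml_action = 'I';

COMMIT;

TRUNCATE TABLE big_table_changes;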

A lot more steps, but with a little tweaking here and there, you should
have a re-usable script in case this process comes up again in the future.
The script(s) should be able to execute all the steps in just a few
seconds, so you'd only have a couple of seconds of downtime where the
original name is unavailable.

Am I wrong anywhere above, guys and gals? My system is nowhere near this
size, but in my system this process would work well.

------------------------------------------------------------
Bill Ferguson
U.S. Geological Survey - Minerals Information Team
PO Box 25046, MS-750
Denver, Colorado 80225
Voice (303)236-8747 ext. 321 Fax (303)236-4208

~ Think on a grand scale, start to implement on a small scale ~




-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx]
On Behalf Of sheldonquinny@xxxxxxxxx
Sent: Tuesday, February 08, 2005 2:39 AM
To: oracle-l@xxxxxxxxxxxxx
Subject: Deletion Of 160 Million Rows.


Hi,

I would just like to ask whether it is possible to delete 60 million rows
at a stretch and without undo contention. The requirement is to delete 60
million records from the 160 million records. It's an OLTP system.

Is there a way to lessen the effort taken by the server process, since it's
an OLTP DB? Answers should be related to Oracle 9.x.

Sheldon.
--
//www.freelists.org/webpage/oracle-l
