Re: Deletion Of 160 Million Rows.

Hi Sheldon,

> The Table Is Online And Updations Are Happening,
Then, with no downtime available, delete is the only
option here. And as far as I can see, there is no time
constraint on finishing the work.

Other suggested options were partitioning and CTAS.
Partitioning is too complex here: it means a redesign,
rethinking all the SQL against that table, and a long
downtime that you do not have.
CTAS is a very good option, but it is more complex than
needed when there is no time pressure. It can be used
if you have some downtime and enough space.
 
> But Out Of That We Just Trying To Get Rid Of 60
> Milliion Records. Obvious Index Are There.
> I Just Want An General Answer As To What Option I
> Should Go For From The Solution U GuysHave Mention.
Then there are two very good options:
 1. deleting some number N of rows in a loop,
committing after each delete set
 2. doing the same but with bulk deletes, as somebody
suggested
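Option 1 could be sketched like this (a minimal sketch:
the table name big_t, the date predicate, and the batch
size of 10000 are all placeholders for your real ones):

```sql
-- Delete in batches of N rows, committing each set.
-- big_t and the WHERE predicate are assumptions; adjust
-- them to the real table and the 60M-row condition.
BEGIN
  LOOP
    DELETE FROM big_t
     WHERE old_date < DATE '2004-01-01'
       AND ROWNUM <= 10000;        -- batch size N
    EXIT WHEN SQL%ROWCOUNT = 0;    -- nothing left to delete
    COMMIT;                        -- commit each delete set
  END LOOP;
END;
/
```

Tune N to trade undo/redo pressure against the number of
round trips through the loop.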

Option 1 is easier. Option 2 can be good if each batch
touches fewer blocks than the equivalent single-row
deletes would. Bulk operations pay off when the rows
being deleted are clustered in fewer blocks than the
same number of individual deletes would visit.
One big delete saves the overhead of executing many
SQL statements and the possible round trips (even with
PL/SQL there is a context switch, which you could call
networking to the SQL engine :).
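Option 2 might look something like this sketch, using
BULK COLLECT and FORALL on ROWIDs (again, big_t and the
predicate are assumed names; also note that committing
inside the fetch loop on a busy table risks ORA-01555,
so size your undo accordingly):

```sql
-- Collect ROWIDs in batches, then delete them with FORALL.
DECLARE
  CURSOR c IS
    SELECT ROWID AS rid
      FROM big_t
     WHERE old_date < DATE '2004-01-01';  -- assumed predicate
  TYPE rid_t IS TABLE OF ROWID;
  rids rid_t;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO rids LIMIT 10000;
    EXIT WHEN rids.COUNT = 0;
    FORALL i IN 1 .. rids.COUNT            -- one bulk delete per batch
      DELETE FROM big_t WHERE ROWID = rids(i);
    COMMIT;
  END LOOP;
  CLOSE c;
END;
/
```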

> All I Need Is The Performace Should Not Be Degraded
> While The System Is Online State.
> The Work Has To Eventually Be Done, At Peek Hours Or
> An WeekEnds.. But The Db Is 24x7 (Table Online).
Then do this when there is no pressure on the system,
as you said. You can always speed it up or slow it down
with some simple throttling logic around the main code.
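One simple throttling idea, as a sketch: sleep between
batches with DBMS_LOCK.SLEEP, and adjust the sleep time
(and batch size) up or down depending on load. The knob
values here are just illustrative:

```sql
-- Pause between delete batches so the system can breathe
-- during busy periods; shrink the sleep during quiet hours.
BEGIN
  LOOP
    DELETE FROM big_t
     WHERE old_date < DATE '2004-01-01'  -- assumed predicate
       AND ROWNUM <= 5000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
    DBMS_LOCK.SLEEP(1);  -- throttle: 1 second between batches
  END LOOP;
END;
/
```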

Now it is all up to you.
You also have a very good test set (60M rows) for trying
each of these strategies (on the first 1M, for example),
then doing the rest of the 59M :)

Regards,
Zoran

--
http://www.freelists.org/webpage/oracle-l
