RE: Deletion Of 160 Million Rows.

  • From: david wendelken <davewendelken@xxxxxxxxxxxxx>
  • To: ryan_gaffuri@xxxxxxxxxxx, thomas.mercadante@xxxxxxxxxxxxxxxxx, "'kristian@xxxxxxxx'" <kristian@xxxxxxxx>, oracle-l@xxxxxxxxxxxxx
  • Date: Tue, 8 Feb 2005 13:48:30 -0800 (PST)

Aw, c'mon Ryan, tell us how you really feel about it.

So, if I add row-level delete triggers, can I have a block AND a row-based delete?

-----Original Message-----
From: ryan_gaffuri@xxxxxxxxxxx
Sent: Feb 8, 2005 1:35 PM
To: thomas.mercadante@xxxxxxxxxxxxxxxxx, 
        "'kristian@xxxxxxxx'" <kristian@xxxxxxxx>, oracle-l@xxxxxxxxxxxxx
Cc: "Mercadante, Thomas F" <thomas.mercadante@xxxxxxxxxxxxxxxxx>
Subject: RE: Deletion Of 160 Million Rows.

I will say this one more time, speaking from experience: I have worked
with large databases in the multi-multi-terabyte range.
The other ideas here are wrong. Do not follow them.

1. The number of records involved is FAR less important than the volume of
data. No matter what shop I go to, it takes the longest time to get this
concept across to people, be they DBAs or whatever. When doing bulk processing
you are reading the whole table. It is a 'block' algorithm, not a row
algorithm.
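One block-oriented way to handle a delete of this size (my own sketch, not something prescribed in this thread; table and column names are hypothetical) is to avoid the DELETE entirely and rebuild the keep-set in bulk, then swap the tables:

```sql
-- Hypothetical names (big_table, created_date) used only for illustration.
-- Rebuild the rows you want to KEEP in one full-scan pass instead of
-- deleting 160 million rows one at a time.
CREATE TABLE big_table_keep NOLOGGING AS
  SELECT * FROM big_table
  WHERE  created_date >= DATE '2004-01-01';

-- Recreate indexes, constraints, and grants on big_table_keep, then swap:
DROP TABLE big_table;
RENAME big_table_keep TO big_table;
```

This reads the whole table once (the 'block' algorithm Ryan describes) and sidesteps the undo/redo cost of a massive DELETE; the trade-off is an outage window while the swap happens.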
