RE: Deletion Of 160 Million Rows.

  • From: Kristian Myllymäki <kristian@xxxxxxxx>
  • To: oracle-l@xxxxxxxxxxxxx
  • Date: Tue, 8 Feb 2005 17:48:34 +0100 (CET)

I would certainly prefer the CTAS option described in the thread, but if
you go with the other option, why use a temp table at all?

Why not instead loop through some batches of deletes until no more rows
are affected by your delete condition?

begin
  loop
    delete from [table]
    where [your delete condition]
    and rownum < 10000; -- or whatever size you prefer
    --
    if SQL%ROWCOUNT = 0 then
      exit;
    end if;
    --
    commit;
  end loop;
end;
/

/Kristian

>
> I used the 2nd method recently where I had to remove 33M rows out of a
> 110M
> rows table. I first collected all rowid's of rows to be removed in a temp
> table with just
> a rowid column. Then I started deleting rows in batches of 100000 using
> this
> temp table.
> Since the 110M table was an insert-only table keyed on (audit) date,
> there was no danger of compromising integrity if I spread the delete
> batches over a week to avoid generating too much extra UNDO/REDO
> (archivelog!) on a single day.
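A minimal sketch of the rowid-based approach described above — `big_table`, `tmp_rowids`, `audit_dt` and the date predicate are placeholder names, not Andre's actual objects:

```sql
-- Collect the rowids of the rows to be removed (placeholder names).
create table tmp_rowids as
  select rowid rid
  from   big_table
  where  audit_dt < to_date('2004-01-01','YYYY-MM-DD');  -- delete condition

-- Delete in batches of 100000 via the saved rowids.
declare
  type rid_tab is table of rowid;
  rids rid_tab;
  cursor c is select rid from tmp_rowids;
begin
  open c;
  loop
    fetch c bulk collect into rids limit 100000;
    exit when rids.count = 0;
    forall i in 1 .. rids.count
      delete from big_table where rowid = rids(i);
    commit;   -- tmp_rowids itself is static, so fetching across
              -- the commit should be safe here
  end loop;
  close c;
end;
/
```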
>
> In fact, this is nothing new compared to Tom's suggestion.
>
> Regards,
> Andre van Winssen
>
>
> -----Original Message-----
> From: oracle-l-bounce@xxxxxxxxxxxxx
> [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Mercadante, Thomas F
> Sent: Tuesday, February 8, 2005 2:07 PM
> To: 'sheldonquinny@xxxxxxxxx'; oracle-l@xxxxxxxxxxxxx
> Subject: RE: Deletion Of 160 Million Rows.
>
>
> Sheldon,
>
> Consider redesigning your table to use partitioning.  You could then
> simply
> drop a partition of the old data that you want to get rid of, and add new
> partitions as new data needs to be loaded to the table.
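For illustration, a range-partitioned layout (hypothetical names and monthly boundaries) would turn the purge into a dictionary operation rather than a delete:

```sql
-- Hypothetical monthly range partitioning; names are placeholders.
create table audit_log (
  audit_dt date not null,
  payload  varchar2(4000)
)
partition by range (audit_dt) (
  partition p2004_12 values less than (to_date('2005-01-01','YYYY-MM-DD')),
  partition p2005_01 values less than (to_date('2005-02-01','YYYY-MM-DD'))
);

-- Dropping old data is then near-instant and generates almost no undo/redo:
alter table audit_log drop partition p2004_12;

-- New partitions are added as new data arrives:
alter table audit_log add partition p2005_02
  values less than (to_date('2005-03-01','YYYY-MM-DD'));
```

Note that in 9i, dropping a partition marks global indexes unusable unless the statement includes `update global indexes`.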
>
> To delete the records, there are two approaches:
>
> 1). Create a new table and insert the records you want to keep into it.
> Drop the old table and rename the new table back to the old name.  This is
> probably going to be faster than a delete process.  You could even insert
> "groups" of rows to better manage the process.
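A sketch of option 1 under hypothetical names (`big_table`, `audit_dt`); indexes, constraints, grants and triggers would all need to be recreated on the new table:

```sql
-- Keep only the wanted rows; nologging/parallel minimize redo
-- (assuming the DBA allows it and a backup follows).
create table big_table_new nologging parallel as
  select *
  from   big_table
  where  audit_dt >= to_date('2004-01-01','YYYY-MM-DD');  -- rows to keep

drop table big_table;
rename big_table_new to big_table;

-- Re-create indexes, constraints, grants and triggers here, then:
alter table big_table logging;
```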
>
> 2). Write a delete process that deletes records in groups with a commit in
> between.  I would do this as follows:
>
> Create a dummy table and populate it with the PK's of the records to be
> deleted.  Write a PL/SQL loop that reads this table and deletes records
> from
> your target tables.  After every 100,000 deletes, commit.  You could make
> this restartable by having a column in the dummy table indicating that the
> delete took place.  Once all the records are deleted, drop the dummy
> table.
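Option 2 might look like this sketch (`pk_to_delete`, `big_table` and the single-column key `pk` are placeholders); the `done` flag is what makes the loop restartable:

```sql
-- Dummy table of keys to delete, with a restart flag (placeholder names).
create table pk_to_delete as
  select pk, 'N' done
  from   big_table
  where  audit_dt < to_date('2004-01-01','YYYY-MM-DD');

declare
  type pk_tab is table of big_table.pk%type;
  pks pk_tab;
begin
  loop
    select pk bulk collect into pks
    from   pk_to_delete
    where  done = 'N' and rownum <= 100000;
    exit when pks.count = 0;
    forall i in 1 .. pks.count
      delete from big_table where pk = pks(i);
    forall i in 1 .. pks.count
      update pk_to_delete set done = 'Y' where pk = pks(i);
    commit;   -- after a crash, rerunning simply skips rows marked 'Y'
  end loop;
end;
/
```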
>
> I like the second option better than the first, but that's just me.  They
> both work fine.
>
> Good Luck.
>
> Tom
>
> -----Original Message-----
> From: Sheldon Quinny [mailto:sheldonquinny@xxxxxxxxx]
> Sent: Tuesday, February 08, 2005 4:39 AM
> To: oracle-l@xxxxxxxxxxxxx
> Subject: Deletion Of 160 Million Rows.
>
> Hi,
>
> I would just like to ask whether it is possible to delete 60 million
> rows at a stretch and without undo contention. The requirement is to
> delete 60 million of the 160 million records in a table. It's an OLTP
> system.
>
> Is there a way to lessen the effort taken by the server process,
> since it's an OLTP DB?
> Answers should be related to Oracle 9.x.
>
> Sheldon.
> --
> //www.freelists.org/webpage/oracle-l

--
//www.freelists.org/webpage/oracle-l
