Sheehan, Jeremy wrote, on my timestamp of 23/11/2010 5:30 AM:
Right now it's using a simple statement for the deletes. NO loops or anything fancy:

DELETE FROM table WHERE cob_date > date;

It's a HUGE transaction, as it's deleting about a week's worth of data at a time. We normally don't have any problems with undo, just the archivelog backups. They fill up the backup directory real quick.
We asked to have the indexes partitioned, but the developers have been using hints that reference the needed indexes, so those transactions go from hours down to minutes. The plan is to eventually get everything onto local indexes so we can avoid creating so much undo/redo: just truncate the partitions and let that be that.
I was planning on running some additional tests with this, but I wanted to see
if anyone had any experience with the following. That's all.
ALTER TABLE sales TRUNCATE PARTITION dec98 UPDATE GLOBAL INDEXES;
Not with that particular flavour. We've converted a lot of our large tables to partitioned ones, to help with a similar archival problem.
But in the process we replaced all potential global indexes with local prefixed ones. And it turned out we actually dropped a lot of indexes along the way: it was faster to let the optimizer prune away the partitions that weren't needed and do full scans on the ones left over than to carry the index overhead, both space- and performance-wise.
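For what it's worth, the conversion can be sketched roughly like this (a minimal sketch: the table, column, index, and partition bounds are made up for illustration, not taken from either of our systems):

-- Hypothetical: replace a global index with a local prefixed one,
-- i.e. one whose leading column is the partitioning key (cob_date).
DROP INDEX sales_cob_ix;
CREATE INDEX sales_cob_ix ON sales (cob_date, product_id) LOCAL;

-- With range partitioning on cob_date, a predicate on that column lets
-- the optimizer prune down to the matching partition(s); full-scanning
-- the few partitions left over can then beat probing a global index.
SELECT SUM(s.amount)
  FROM sales s
 WHERE s.cob_date >= DATE '2010-11-15'
   AND s.cob_date <  DATE '2010-11-22';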
Some changes to the SQL and its hints were needed, but given that in some cases the performance improved by up to 10 times, it was judged a worthwhile exercise.
Not saying that is a yardstick for everyone, just encouraging you to talk to the developers and get everything over to local indexes ASAP.
Until then, you may well have to live with the redo overhead. With TRUNCATE ... UPDATE GLOBAL INDEXES it should be a lot smaller, though: the truncate itself is DDL, so only the global index maintenance generates redo for the affected entries, and that hopefully won't incur as much overhead.
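If you want to put numbers on it before and after, one rough way (assuming your account can see v$mystat and v$statname) is to snapshot the session's 'redo size' statistic around the operation and subtract:

-- Redo generated by the current session so far; run before and after
-- the delete/truncate and take the difference.
SELECT n.name, m.value
  FROM v$mystat m
  JOIN v$statname n ON n.statistic# = m.statistic#
 WHERE n.name = 'redo size';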
--
Cheers
Nuno Souto
in sunny Sydney, Australia
dbvision@xxxxxxxxxxxx
--
http://www.freelists.org/webpage/oracle-l