RE: ** performance with delete

  • From: "Mark W. Farnham" <mwf@xxxxxxxx>
  • To: <ajoshi977@xxxxxxxxx>, <oracle-l@xxxxxxxxxxxxx>
  • Date: Sat, 14 Mar 2009 12:12:11 -0400

Partitioning and never deleting is a good solution to this problem. Whatever
condition validates a row for delete is instead used to omit that row when
copying forward to a new partition (or to a catch-all partition of all oldish
rows, which is itself periodically winnowed). After a partition drops to your
chosen percentage of rows remaining, the "not valid for delete" rows are
copied forward and you exchange the partitions. True, there is some
administration not required by delete, but it is far cheaper. You might
still want exp for archival reasons, or for the ability to load the data for
analysis somewhere else, but I'm not sure what the point of the imp would be.
Certainly the aggregate of administrative issues and possible minuscule
downtime with the partition swapping is far, far less than an import. You can
also beat exp/imp with a decently constructed "poor man's" partitioning:
multiple tables representing the partitions, a UNION ALL view over them, and
synonym rotation.
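A minimal sketch of the copy-forward-and-exchange approach described above. All table, partition, column, and index names here are hypothetical (an ORDERS table range-partitioned by month, where STATUS = 'CLOSED' marks a row as valid for delete); adapt them to your schema.

```sql
-- 1. Build a staging table holding only the rows to keep from the
--    partition being winnowed (i.e., omit the rows you would delete).
CREATE TABLE orders_keep AS
SELECT *
FROM   orders PARTITION (p_2008_09)
WHERE  status <> 'CLOSED';

-- 2. Create indexes on the staging table matching the partition's
--    local indexes, so the exchange can carry them across.
CREATE INDEX orders_keep_ix1 ON orders_keep (order_id);

-- 3. Swap the winnowed copy in. This is a data-dictionary operation,
--    not a row-by-row delete, so it is fast and generates little redo.
ALTER TABLE orders
  EXCHANGE PARTITION p_2008_09 WITH TABLE orders_keep
  INCLUDING INDEXES WITHOUT VALIDATION;

-- 4. The staging table now holds the old segment with the rows that
--    would otherwise have been deleted: drop it, or keep it around
--    (or exp it) for archival.
DROP TABLE orders_keep;
```

The "possible miniscule downtime" mentioned above is the brief window around step 3, during which queries against that partition could momentarily conflict with the exchange.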

 

Regards,

 

mwf

 

  _____  

From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx]
On Behalf Of A Joshi
Sent: Saturday, March 14, 2009 11:40 AM
To: oracle-l@xxxxxxxxxxxxx
Subject: ** performance with delete

 


Hi,
   I have a choice: delete rows from tables in an OLTP database (10g on
SunOS) on an ongoing basis, or do it once every six months. The percentage of
rows deleted varies by table and is up to 50% of rows for some tables. The
tables get new rows steadily either way. Either way, I plan to do an exp/imp
of the tables once every six months. After the exp/imp I think it will be the
same either way and performance will be fine at that point. The question is
how performance will behave during the six months before the exp/imp, and
whether there is any way to check, or to take action based on checks.
Method 1: deletes on an ongoing basis. Method 2: all the deletes in one shot
at the end of six months. Is one preferable over the other, or is there a
criterion, or does it 'depend'? The tables have multiple indexes and are used
extensively. The clustering factor of the indexes is one criterion to
consider. I am thinking index access and index size would behave differently
from table access. I am trying to figure out the table access being done by
applications/jobs. I could also use the shrink command in between, but I do
not know whether it has bugs or other issues; to be safe, an exp/imp or move
is planned.  Thanks 
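Two of the checks raised in the question can be sketched in SQL. The table name ORDERS is hypothetical; the dictionary views and the shrink syntax are standard 10g features.

```sql
-- Gauge index clustering factor against table size: a value near the
-- table's block count is good; a value near NUM_ROWS means index range
-- scans will touch many scattered table blocks.
SELECT i.index_name,
       i.clustering_factor,
       t.blocks,
       t.num_rows
FROM   user_indexes i
       JOIN user_tables t ON t.table_name = i.table_name
WHERE  i.table_name = 'ORDERS';

-- Online segment shrink (10g+, ASSM tablespaces only) reclaims space
-- left by deletes without an exp/imp or move; it requires row movement.
ALTER TABLE orders ENABLE ROW MOVEMENT;
ALTER TABLE orders SHRINK SPACE CASCADE;  -- CASCADE also shrinks the indexes
```

Note that, unlike ALTER TABLE ... MOVE, shrink does not invalidate indexes, but it does move rows, so rowid-based application logic would be affected either way.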

 
