RE: Daily partitioning

  • From: "Mark W. Farnham" <mwf@xxxxxxxx>
  • To: <ganstadba@xxxxxxxxxxx>, "'Oracle L'" <oracle-l@xxxxxxxxxxxxx>
  • Date: Fri, 14 May 2010 15:03:51 -0400

To me it doesn't seem like much work at all to do once, to save pushing
150MB/day into undo, plus the overhead for the indexes and the redo generated
to keep the undo recoverable, etc. Even if you don't have a performance
problem, it costs money to store those things.

 

In fact, if you follow the Gorman/Rahn advice for doing this it just
naturally falls into place, and the queries will often consume less
horsepower. True, it is a separate question whether the savings will pay for
the partitioning option, but setting that aside, why not do it the obviously
better way when it is well defined and simple?
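
As a rough sketch of what that might look like on 11gR2 (table and column
names made up purely for illustration), monthly interval partitioning on the
date column turns the purge into a near-instant metadata operation instead of
a daily DELETE:

    CREATE TABLE sales_fact (
        sale_date   DATE         NOT NULL,
        customer_id NUMBER       NOT NULL,
        amount      NUMBER(12,2)
    )
    PARTITION BY RANGE (sale_date)
    INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
    (
        PARTITION p_initial VALUES LESS THAN (DATE '2010-01-01')
    );

    -- Aging out a month becomes a partition drop, generating almost no
    -- undo or redo, instead of deleting ~150MB/day row by row:
    ALTER TABLE sales_fact DROP PARTITION
        FOR (DATE '2010-02-15') UPDATE INDEXES;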

 

And I don't doubt that it runs pretty well using your method on much larger
datasets. Hardware is fast and Oracle processes well. But for just a little
setup work, done once, how much less expensive and faster could you be? And
what more important query or process might be faster or more responsive as a
side effect? Or what additional licenses might you avoid buying?

 

I've seen CTD (Compulsive Tuning Disorder), and this is not it. I encourage
you to consider moving to the apparently monthly partitioning that would
support your information life cycle. Since you have no current performance
problem, you could even cycle into it over a 13-month period. You wouldn't
start saving work as soon, but you would avoid a mass data move to get
there. Of course (see Tim's mantra), when you get down to only 3 to 6
unpartitioned months or so left, it will be cheaper to insert them into new
partitions and whack the monolith than to delete the oldest month.
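
That final cutover might look roughly like this (hypothetical table names:
sales_fact_old is the remaining monolith, sales_fact is the new partitioned
table from the earlier sketch):

    -- Copy the last few months still sitting in the monolith into the
    -- partitioned table, then drop the old table.
    INSERT /*+ APPEND */ INTO sales_fact
    SELECT * FROM sales_fact_old
    WHERE  sale_date >= ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -6);

    COMMIT;

    DROP TABLE sales_fact_old PURGE;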

 

Regards,

 

mwf

 

  _____  

From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx]
On Behalf Of Michael McMullen
Sent: Friday, May 14, 2010 2:25 PM
To: 'Oracle L'
Subject: RE: Daily partitioning

 

This seems like a lot of work for 6GB of data and deleting ~150MB/day. Is
there a performance problem currently, or is this pure design? We're
primarily DSS and mostly keep data for 13 months. We have very few
partitioned tables and base deletes on < trunc(sysdate) - 13 months. A lot
of the deletes are just run in batch once/week; reports are usually based on
the last few days, so it all runs pretty well. Some of our tables are over
100GB and use this method.
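
For what it's worth, the weekly batch purge is roughly of this shape
(made-up table and column names):

    -- Purge rows older than the 13-month retention window in one pass.
    DELETE FROM report_data
    WHERE  load_date < ADD_MONTHS(TRUNC(SYSDATE), -13);

    COMMIT;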

 

  _____  

 

1. How does the data arrive? In a batch load or OLTP?
2. How much data are you talking about?
3. What do typical queries look like?
4. What's the DB version?

1. Batch load every night

2. The whole table may be about 6GB

3. DSS query using date field

4. 11gR2



-- 
http://www.stefanocislaghi.eu
