Re: Large (highest ?) Number of Partitions / SubPartitions in *Production*

  • From: Greg Rahn <greg@xxxxxxxxxxxxxxxxxx>
  • To: hkchital@xxxxxxxxxxxxxx
  • Date: Sun, 15 Feb 2009 18:14:13 -0800

On Sat, Feb 14, 2009 at 10:58 PM, Hemant K Chitale
<hkchital@xxxxxxxxxxxxxx> wrote:
> Have you implemented / seen implemented, say 64 thousand Partitions for a
> table (and this being done for possibly half-a-dozen tables), with at least
> one corresponding LOCALly Partitioned Index ?
>
> What are the implications on :
> 1. MetaData in the Shared Pool and Latching.
> 2. Gather_Stats (done GLOBAL or PARTITION wise).
> 3. Queries that, very unfortunately, end up scanning all the partitions,
> many of which are small 64K partitions only.


First I have to ask: what problem are you attempting to solve by using
64 thousand partitions, and what other approaches have you considered?

1. One of the projects I was involved with had 3 tables (IIRC), each
with 90 one-day partitions, each of which had 1024 hash subpartitions
(90 x 1024 = 92,160 segments per table).  I would not recommend that
unless you know *exactly* why it is required (to put the size in
perspective, the temp tablespace was 32 terabytes).  Space is required
in the buffer cache for all of the segment header information, unless
you want to wait for it to be read from disk each time.  The more
segments you have, the more space is needed.
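
For illustration, that kind of layout would typically be declared as a
composite range-hash table along these lines (a minimal sketch only;
the table, column, and partition names are hypothetical, and the daily
partitions would be repeated 90 times):

CREATE TABLE fact_events (
  event_ts  DATE,
  cust_id   NUMBER,
  payload   VARCHAR2(100)
)
PARTITION BY RANGE (event_ts)
SUBPARTITION BY HASH (cust_id) SUBPARTITIONS 1024
(
  PARTITION p20090101 VALUES LESS THAN (TO_DATE('2009-01-02','YYYY-MM-DD')),
  PARTITION p20090102 VALUES LESS THAN (TO_DATE('2009-01-03','YYYY-MM-DD'))
  -- ... one partition per day, 90 days in total = 92,160 segments
);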

2. I probably would not attempt to use so many partitions unless you
were on 11g and used incremental partition statistics.
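
As a sketch of what enabling that looks like in 11g (the schema and
table names here are hypothetical):

-- Enable incremental statistics, so global stats are derived from
-- per-partition synopses instead of re-scanning the whole table
BEGIN
  DBMS_STATS.SET_TABLE_PREFS('SCOTT', 'FACT_EVENTS',
                             'INCREMENTAL', 'TRUE');
END;
/

-- With the preference set, subsequent gathers only re-scan the
-- partitions whose data has changed
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'FACT_EVENTS',
                                granularity => 'AUTO');
END;
/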

3. If the partition size is only 64K, as in 64,000 bytes, it is going
to be relatively inefficient if you are using parallel query; 64K is
very small for the granule size.
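
One way to sanity-check this is to look at the actual segment sizes
(a hypothetical query; substitute your own schema owner):

-- Average partition/subpartition size per table -- anything in the
-- tens-of-KB range is far too small to make useful PQ granules
SELECT segment_name,
       COUNT(*)                 AS segment_count,
       ROUND(AVG(bytes) / 1024) AS avg_kb
FROM   dba_segments
WHERE  owner = 'SCOTT'
AND    segment_type IN ('TABLE PARTITION', 'TABLE SUBPARTITION')
GROUP  BY segment_name;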

-- 
Regards,
Greg Rahn
http://structureddata.org

