Sane number of table partitions in a DWH

  • From: "Milen Kulev" <makulev@xxxxxxx>
  • To: oracle-l@xxxxxxxxxxxxx
  • Date: Mon, 30 Mar 2009 18:47:55 +0200

Hello Listers,
I am on site at a pretty creative customer.
The developers want to partition the fact table(s) into the smallest slices possible.
The number of calculated/estimated partitions is 200,000 - 300,000 (yep, this is
NOT a typo!).
My questions:
1) What is a "manageable" number of partitions (from a performance point of
view, i.e. parse times) that you have seen at client sites?
2) Are there any parameters (including underscore parameters) to reduce the
hard-parse times (because of the enormous number of partitions)?
3) With so many partitions, what is the influence on shared pool fragmentation?
Any experiences? (A quick diagnostic sketch follows after this list.)
4) Is multi-column partitioning a safe bet (bugs)?
Does it run stably?
Are any special one-off patches on top of 10.2.0.4 needed?
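
To make questions 2) and 3) concrete, this is roughly what I plan to check,
using nothing beyond the standard v$ views in 10.2 (statistic names as I know
them from 10.2):

-- Hard-parse counts and parse-time statistics (instance-wide since startup)
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('parse count (hard)', 'parse time cpu', 'parse time elapsed');

-- Free memory left in the shared pool
SELECT pool, name, bytes
  FROM v$sgastat
 WHERE pool = 'shared pool'
   AND name = 'free memory';

-- Misses/failures against the reserved pool
-- (a rough indicator of shared pool fragmentation)
SELECT requests, request_misses, request_failures, last_failure_size
  FROM v$shared_pool_reserved;
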

The fact tables are ~ 700 - 1000 GB in size.
At the moment the partitioning scheme is range(date)/hash(location_id); a sketch
of this scheme is below.
No partition exchange is used, and none will be used.
Data is relatively volatile (UPDATEs, DELETEs) plus the usual INSERTs (some kind
of operational DWH).
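
To illustrate the scheme (table name, column names other than location_id, and
the subpartition count are made up for the example; fine-grained date ranges
times many hash subpartitions is how the partition count explodes):

CREATE TABLE sales_fact (
    sale_date    DATE    NOT NULL,
    location_id  NUMBER  NOT NULL,
    amount       NUMBER
)
PARTITION BY RANGE (sale_date)
SUBPARTITION BY HASH (location_id) SUBPARTITIONS 16
(
    PARTITION p_2009_03 VALUES LESS THAN (TO_DATE('2009-04-01','YYYY-MM-DD')),
    PARTITION p_2009_04 VALUES LESS THAN (TO_DATE('2009-05-01','YYYY-MM-DD')),
    PARTITION p_max     VALUES LESS THAN (MAXVALUE)
);
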

Application type = DWH ( ~ 4.5 TB)
The application sends SQLs with literals (static partition pruning); a way to
verify the pruning is sketched below.
Oracle Version = 10.2.0.4
OS =   Suse 9 SP4
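
To verify the static pruning I would look at the Pstart/Pstop columns in the
plan, e.g. (again using the made-up sales_fact table from the sketch above):

EXPLAIN PLAN FOR
SELECT SUM(amount)
  FROM sales_fact
 WHERE sale_date   = TO_DATE('2009-03-30','YYYY-MM-DD')
   AND location_id = 4711;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- With literals, Pstart/Pstop should show a single partition/subpartition
-- number rather than KEY or a wide range.
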

I will be grateful for any links/presentations, experiences, relevant ML notes,
etc.

Best Regards
Milen