<jumping into the reorg debate quagmire>
Leslie,
I have one question: why is such a level of detail needed?
In the past, for each schema or database, we defined 4 sizes of objects (along the lines of autoallocate boundaries).
- Small - 128k (or the value of db_block_size * db_file_multiblock_read_count). This is also our baseline multiple.
- Medium - 1m
- Large - 64m
- Extra-Large - 256m
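To make the four-bucket scheme concrete, here is a minimal sketch of how a segment's estimated size might be mapped to one of those extent sizes. The block size, multiblock read count, and the "aim for at most ~1024 extents" rule are illustrative assumptions, not anything Daniel specified.

```python
# Sketch of the four uniform extent sizes described above.
# Assumptions: 8k blocks and db_file_multiblock_read_count = 16,
# which yields the 128k baseline.

KB = 1024
MB = 1024 * KB

DB_BLOCK_SIZE = 8 * KB                 # assumed block size
MULTIBLOCK_READ_COUNT = 16             # assumed read count -> 128k baseline

EXTENT_SIZES = {
    "small": DB_BLOCK_SIZE * MULTIBLOCK_READ_COUNT,   # 128k, baseline multiple
    "medium": 1 * MB,
    "large": 64 * MB,
    "extra-large": 256 * MB,
}

def pick_bucket(estimated_bytes: int) -> str:
    """Pick the smallest bucket that keeps the extent count manageable.

    Illustrative rule (an assumption): allow at most ~1024 extents
    per segment before moving up to the next extent size.
    """
    for name, extent_size in EXTENT_SIZES.items():
        if estimated_bytes <= extent_size * 1024:
            return name
    return "extra-large"
```

A 100 MB segment would land in the small bucket (128k extents), while a multi-gigabyte fact table would be pushed into the large or extra-large buckets.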
Regards, Daniel Fink
Applying the formulas as documented in Oracle 8, and comparing the results to an actual Oracle 8i database, yields the following. (The 8i database is a data warehouse; I'm interested only in 8 fact tables ranging in size from 10 million to 500 million rows. The 9i database is for a new client and is the reason for this exercise.)
- Table estimates for non-partitioned tables are within 10% of the actual (the estimates are uniformly lower than the actuals).
- But the table estimates don't take partitioned tables into account, and those are farther off.
- B-tree index estimates were modified to reflect the changed size of the ROWID; most are still within 10% deviation from the actual.
- We're still looking at the bitmap indexes (determining cardinality) and the local partitioned indexes.
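For anyone following along, the table-sizing arithmetic behind estimates like these has roughly the following shape. This is a hedged sketch only: the block-overhead and per-row-overhead constants below are illustrative assumptions, not the exact figures from the Oracle 8 documentation, and real estimates must also account for column length bytes, trailing NULLs, and (as noted above) partitioning.

```python
# Rough shape of an Oracle 8-style table size estimate.
# block_overhead and row_overhead are assumed placeholder values,
# not the manual's exact constants.

def estimate_table_blocks(num_rows: int,
                          avg_row_bytes: int,
                          block_size: int = 8192,
                          pctfree: int = 10,
                          block_overhead: int = 90,
                          row_overhead: int = 5) -> int:
    # Space left in each block after fixed header overhead and PCTFREE.
    usable = (block_size - block_overhead) * (100 - pctfree) // 100
    # How many average rows (plus per-row header) fit in that space.
    rows_per_block = max(1, usable // (avg_row_bytes + row_overhead))
    # Ceiling division: total blocks needed for all rows.
    return -(-num_rows // rows_per_block)

# Example: 10 million rows averaging 120 bytes, on 8k blocks.
blocks = estimate_table_blocks(10_000_000, 120)
```

Because PCTFREE and the overhead terms only subtract from usable space, a formula of this shape tends to undercount slightly whenever real rows chain, migrate, or vary widely in length, which is consistent with the estimates running uniformly below the actuals.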
More problematic is actually determining how many rows are going to be in each fact table -- both at startup and at 6 month intervals.
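One way to frame that row-count question is as a simple growth projection per fact table; the starting counts and daily load rates below are made-up numbers for illustration, and a real projection would need the client's actual feed volumes (and may well be non-linear).

```python
# Illustrative linear projection of fact-table row counts at 6-month
# checkpoints. All inputs are assumed example figures.

def project_rows(initial_rows: int, rows_per_day: int, months: int) -> int:
    """Initial load plus a constant daily feed, ~30 days per month."""
    return initial_rows + rows_per_day * months * 30

# e.g. a fact table starting at 10M rows and loading 50k rows/day:
checkpoints = {m: project_rows(10_000_000, 50_000, m) for m in (6, 12, 18)}
```

Running the same projection per fact table gives the startup and 6-month sizing inputs the extent scheme and sizing formulas need.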
Unfortunately, we're not going to 10g any time soon, so I can't use the EM facility Lex mentioned.
Leslie
----------------------------------------------------------------
Please see the official ORACLE-L FAQ: http://www.orafaq.com
----------------------------------------------------------------
To unsubscribe send email to: oracle-l-request@xxxxxxxxxxxxx
put 'unsubscribe' in the subject line.
--
Archives are at //www.freelists.org/archives/oracle-l/
FAQ is at //www.freelists.org/help/fom-serve/cache/1.html
-----------------------------------------------------------------