Tom,
I have found that most users/developers/designers/architects (at least those who understand why you perform sizing processes) are
usually able to make reasonable approximations. I dealt with one client where we went through the formulas, discussions of average
column size, relationships, etc. IIRC, it took over 2 weeks to generate the first sizing report, refine it, regenerate it, refine again, etc.
A couple of years later, I was called back in to help them with the physical design of a new project. I dealt with the same
people, but took a different approach. I explained the new approach and we outlined 4 sizes (close to what I previously mentioned),
then started dropping the tables/indexes into the appropriate slots. About 90% of the tables/indexes were placed without much
discussion. For example, the code lookup tables (state, zip code, document type, etc.) we just dropped into the small category; that was 20%
of the tables done in about 5 minutes. The next 60% we talked about briefly, but dropped them into the categories based upon an
eyeball of the datatypes and a row estimation. If the row estimation was dependent upon another table, we put in the formula for
rows (rows = 2 * rows_in_other_table). If the rows_in_other_table was known (or estimated), we finished the q&d calc. If not, we
kept going and revisited it when we had completed the first pass. The remaining 20% we spent more time on, examining each column's
datatype and anticipated length, number of rows, etc. This whole process took less than half a day for a couple hundred tables. When it
went into production, we found that we had overestimated the sizes (which I think is natural for anyone dealing with physical designs).
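As a sketch, the q&d calc for one of those dependent tables might look like this in SQL*Plus (the 1,000,000 rows_in_other_table and the 120-byte eyeballed average row length are illustrative numbers only, not figures from the actual engagement):

    -- rows = 2 * rows_in_other_table, times an eyeballed average row length
    SELECT 2 * 1000000                             AS est_rows,
           2 * 1000000 * 120                       AS est_bytes,
           ROUND(2 * 1000000 * 120 / 1024 / 1024)  AS est_mb
      FROM dual;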
My approach to management was, "I can bill you for 2 weeks of painstakingly detailed calculations that are still guesses, or I can
bill you for 2 days (including updating the DDL with the appropriate tablespaces) of a logical estimation. You decide." If management were
to invest those 8 days of pay/billing in more disk, they would probably have enough space to handle a 50%+ underestimation (which I
have never seen happen using the q&d method).
To have an exact result, you need exactness in all the inputs. Unless you have an example system, the user inputs
are still guesses. And when you do have an example system, just query dba_segments to get the total allocated space for each object, and forget
the rowcount & average row length + overhead.
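For example, something along these lines (assuming you have access to the DBA views; SCOTT is just a placeholder schema):

    SELECT segment_name, segment_type,
           ROUND(SUM(bytes)/1024/1024, 1) AS allocated_mb
      FROM dba_segments
     WHERE owner = 'SCOTT'
     GROUP BY segment_name, segment_type
     ORDER BY allocated_mb DESC;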
Regards, Daniel
Daniel,
I agree with your methodology if your users can answer your questions. But sometimes all the users can do is declare a record count for a table, or a ratio of "5 of these records for every one of those records." Then we are left with coming up with a methodology that makes management happy. I agree that the formulas Oracle provides are a "WAG" (Wild Assed Guess) at best, but at least I can say "I am following the formulas provided by Oracle." That provides a comfort level for managers, so they feel confident they will not need to purchase more disk in the near future.
And I *always* add 10% to the estimates I come up with to give *me* a comfort level!
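As a minimal sketch, that back-of-the-envelope arithmetic comes out roughly like this (the 200,000 base rows, 120-byte average row length, and 20% overhead factor are assumed purely for illustration; the 5:1 ratio and the 10% margin are the ones mentioned above):

    SELECT ROUND(5 * 200000      -- "5 of these records for every one of those"
                 * 120           -- assumed average row length in bytes
                 * 1.2           -- assumed block/segment overhead factor
                 * 1.1           -- my 10% comfort margin
                 / 1024 / 1024)  AS est_mb
      FROM dual;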
Tom Mercadante Oracle Certified Professional