Re: 10g System statistics - single and multi

  • From: Martic Zoran <zoran_martic@xxxxxxxxx>
  • To: breitliw@xxxxxxxxxxxxx
  • Date: Wed, 25 May 2005 01:51:22 -0700 (PDT)

Hi Wolfgang,

While I agree with you that cardinality and how well
an object is cached are the most influential factors,
I think the IO cost is very influential too.
Without proper mreadtim, sreadtim and MBRC values,
the optimizer can get that cost badly wrong.
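
Just to illustrate what I mean by "proper" values,
here is a quick sketch (the exact call parameters may
differ between versions, so take it as an outline only):

  -- what the CBO is currently working with
  select sname, pname, pval1 from sys.aux_stats$;

  -- gather workload system statistics over a representative period
  exec dbms_stats.gather_system_stats('START');
  -- ... let the representative workload run ...
  exec dbms_stats.gather_system_stats('STOP');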

I don't think an MBRC at the segment level would harm
the parser, because it is retrieving the object
statistics (#NBLKS, ...) anyway. Of course, tracking
MBRC at the lower kernel level would cause a problem
(it is hard for me to say how complex that would be
for Oracle to implement). And of course, if no
object/segment-level MBRC is available, the
system-level MBRC can be used.

If SREADTIM and MREADTIM are too complex to collect at
the segment level, they could be collected at the
tablespace/datafile level (that is already partially
done).
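
For example, something along these lines against
V$FILESTAT gives a rough per-datafile picture (just a
sketch; the timing columns are in centiseconds, hence
the factor of 10, and the counters are cumulative
since instance startup, so deltas between two
snapshots are what you really want):

  select file#,
         round(10 * singleblkrdtim / nullif(singleblkrds, 0), 1)  sreadtim_ms,
         round(10 * (readtim - singleblkrdtim)
                  / nullif(phyrds - singleblkrds, 0), 1)          mreadtim_ms,
         round((phyblkrd - singleblkrds)
                  / nullif(phyrds - singleblkrds, 0), 1)          mbrc
    from v$filestat;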

For example, on one huge customer system I saw MBRC
between 21 and 128 (the maximum with an 8k block size
on that system), sreadtim between 6 and 11 ms, and
mreadtim between 30 ms and 60 ms. Of course, the 30 ms
figure was for MBRC 21, so to compare like with like I
should scale it to roughly 180 ms, which is 3 times
the mreadtim measured at MBRC 128.
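
(Spelling out the scaling: 30 ms * (128 / 21) ~= 183 ms,
which is about 3 * 60 ms.)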

So we have these differences in the "system averages":

MBRC: 6 times difference
MREADTIM: 3 times difference for the same MBRC
SREADTIM: almost 2 times difference

These differences may be even bigger between
individual segments, tablespaces and datafiles.

Of course, somebody will say these are the differences
between a heavily loaded system and an unloaded one.
That is true. But the same variation happens within a
given load at the segment or tablespace level too.
Please do not tell me that all objects/tablespaces are
equally important and that I/O behaves the same across
the board for all objects.

A lot of bad optimizer decisions, choosing an FTS
instead of an NL join or vice versa, come from an IO
cost that is this far off.
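
Roughly speaking, with workload system statistics the
multiblock I/O cost is usually described as (an
approximation, not the exact internal formula):

  FTS I/O cost   ~=  (blocks / MBRC) * MREADTIM / SREADTIM
  NL/index path  ~=  number of single-block reads, each costed at SREADTIM

So if MBRC, MREADTIM or SREADTIM are off by the
factors above, the FTS cost moves by the same factors
relative to the indexed plan, and the FTS-vs-NL
crossover point moves with it.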

Or am I wrong?

For me, the IO cost is a very important and integral
piece of the cost-based optimizer, together with
knowing how much IO will be needed.
So both the number of IOs and how fast those IOs are
matter.

Wolfgang, there is maybe a little bit of confusion in
my head about the intermingled dependencies between IO
costs and CPU costs, especially because a different
plan leads to a different IO cost, but the IO cost may
lead to a different plan too.
I cannot judge at the moment what influence SQL
Profiles have on the whole thing.
It looks like I am lost now :)

Regards,
Zoran

> I disagree with gathering sreadtm, mreadtm, and mbrc
> by segment. That 
> would generate too much data which the CBO has to
> process at parse time 
> for too little gain. There are other areas in the
> CBO which have the 
> potential for much greater payback or less overhead.
> Like getting a grip 
> on the caching efficiency of different segments or
> dynamically 
> correcting stark cardinality errors a-la profiles.
> 
> -- 
> Regards
> 
> Wolfgang Breitling
> Centrex Consulting Corporation
> www.centrexcc.com 


                