Re: 10g System statistics - single and multi

  • From: Wolfgang Breitling <breitliw@xxxxxxxxxxxxx>
  • To: Christo Kutrovsky <kutrovsky.oracle@xxxxxxxxx>
  • Date: Tue, 24 May 2005 10:33:58 -0600


Christo Kutrovsky wrote:

> Wolfgang,
> 
> It was a long weekend here, with a holiday on Monday. Sorry for the delay.

Same here. I'm just 2 hours (earth rotational delay) west of you, but 
same country.

>>I have to disagree again. I just did a mathematical transformation of the
>>cost formula as documented in the Performance Tuning Guide
> 
> 
> I was referring to the timing differences, not the formula itself. Given
> that an FTS will mostly do sequential multiblock reads, while range scans
> do random single-block reads, the reduction factor will be off by a
> significant amount.
> I tend to think of "cost" as time measured in "sreadtim" units.
> 
We are talking averages here. Assuming that system statistics are 
gathered over a representative workload, the sreadtim, mreadtim, and mbrc 
readings represent system-wide averages of what actually happened 
during that time - with all the caveats regarding averages. Therefore, 
plugging those values into the cost formula gives the best 
approximation of the elapsed time for the sql. There are many more areas 
where this estimate can go wrong than the small differences between read 
types - beginning with the CBO's assumptions about the data 
distribution and the selectivity of the predicates, through the 
overestimation of the clustering factor, to the effect of caching in the 
buffer pool.
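To make "cost as elapsed time in sreadtim units" concrete, here is a minimal sketch of the documented 10g cost formula, cost = (#SRds * sreadtim + #MRds * mreadtim + #CPUCycles / cpuspeed) / sreadtim. The function name, argument names, and default values are mine, not Oracle's:

```python
def cbo_cost(sr_reads, fts_blocks, sreadtim, mreadtim, mbrc,
             cpu_cycles=0, cpuspeed=1):
    """Approximate CBO cost with system statistics: the estimated
    elapsed time, expressed in units of one single-block read."""
    mr_reads = fts_blocks / mbrc                  # multiblock reads needed
    elapsed = (sr_reads * sreadtim                # single-block read time
               + mr_reads * mreadtim              # multiblock read time
               + cpu_cycles / cpuspeed)           # CPU component
    return elapsed / sreadtim                     # normalize to sreadtim units

# e.g. a 10,000-block full scan with mbrc=8, sreadtim=5ms, mreadtim=20ms
fts = cbo_cost(0, 10_000, sreadtim=5, mreadtim=20, mbrc=8)    # 5000.0
# versus reading the same 10,000 blocks via single-block reads
idx = cbo_cost(10_000, 0, sreadtim=5, mreadtim=20, mbrc=8)    # 10000.0
```

With averaged system statistics the full scan comes out at half the cost of single-block access in this toy case - and any skew in the gathered averages shifts both estimates together.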

> "oltp" test:
> Your timings seem relatively low (i.e. too fast). This would be
> due to disk locality. In my OS-side testing, I did a test with
> random IO across the whole disk drive, and random IO across 1/3 of
> the drive. The difference is about 3 times in IO capacity (I forgot to
> get response time results, but I assume the same difference). Thus I think
> if you were to do IO over the whole array, your average times for single
> reads would at the very least double, which would make the difference
> significant. I am not sure how you could test this easily on the Oracle
> side. One way I can think of, if the file system on the array in
> question is relatively empty, is to create a tablespace with multiple
> large files until it fills up the entire file system. Oracle will then
> use its round-robin extent allocation and the data will be somewhat
> spread over the whole array. I am putting this test on my "todo" list
> too.

My contention is that if you have a database which is spread out that far, 
your mreadtim will go up too, as the heads will have to move greater 
distances as well to serve random multiblock reads from different sessions.

> RAC and system stats:
> 
> CPUspeed is the one which should have multiple values, one per instance;
> I agree, someone just forgot RAC, or did not have time to implement it.
> For sreadtim/mreadtim I also agree, those should be per segment.
> With that same system in mind, we're already splitting the data onto
> different arrays based on access patterns, and it will not make sense
> to use the same system stats to cost both sets of objects.
> 

I disagree with gathering sreadtim, mreadtim, and mbrc by segment. That 
would generate too much data for the CBO to process at parse time 
for too little gain. There are other areas in the CBO which have the 
potential for much greater payback at less overhead - like getting a grip 
on the caching efficiency of different segments, or dynamically 
correcting stark cardinality errors a la profiles.

-- 
Regards

Wolfgang Breitling
Centrex Consulting Corporation
www.centrexcc.com
--
//www.freelists.org/webpage/oracle-l
