Re: db_file_multiblock_read_count and 10g

  • From: Nuno Souto <dbvision@xxxxxxxxxxxx>
  • Date: Mon, 26 Jan 2009 10:22:26 +1100

Greg Rahn wrote, on my timestamp of 26/01/2009 8:15 AM:

So far, I've been getting same or better performance
setting it to the SAN's stripe block size rather than letting it set itself.
10.2.0.3 here.

I'm assuming I/O efficiencies (elapsed times), but are the execution
plans the same?

Yes, very much the same. Otherwise it wouldn't make sense to compare, although of course some plans are better than others.


What performance differences do you see in a 64KB net MBRC (your stripe
size) setting vs a 1MB MBRC I/O size?

I've seen some cases on some SANs where smaller I/O sizes actually
yield better overall I/O times.  This has been attributed to I/O
prefetch algorithms on the SAN.  But on the other hand I've seen cases
where it doesn't matter either way.

Indeed! Exactly what I'm seeing. But I've turned off all pre-fetching at SAN level: it doesn't make much sense in our mixed environment and is downright counter-productive. Very similar figures between 1MB and 64K.
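For reference, the net multiblock I/O size is just db_block_size times db_file_multiblock_read_count. A quick back-of-envelope sketch (Python purely for illustration; the 8KB block size is an assumption, check your own db_block_size):

```python
# Net multiblock read size = db_block_size * db_file_multiblock_read_count.
# The 8KB block size below is an assumption for illustration only.
def mbrc_io_size(db_block_size: int, mbrc: int) -> int:
    """Size in bytes of a single multiblock read request."""
    return db_block_size * mbrc

BLOCK = 8 * 1024  # 8KB blocks (assumption)

# MBRC of 8 lines up with a 64K stripe unit; 128 gives the 1MB OS maximum.
print(mbrc_io_size(BLOCK, 8))    # 65536   -> 64K
print(mbrc_io_size(BLOCK, 128))  # 1048576 -> 1MB
```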

Our RAID-10 is set to a 64K stripe - BTW, that is not guaranteed: the effective stripe width varies in multiples of 64K with the number of mirrored pairs in the stripe, so always check! - and testing shows the same or indeed slightly better IO throughput out of 64K than 1M. Although of course our OS can do 1M max.
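To illustrate the "not guaranteed" point above: in a RAID-10 set the full stripe width is the per-disk stripe unit multiplied by the number of mirrored pairs, so the same 64K unit gives quite different full-stripe sizes depending on layout. A hypothetical sketch, not specific to any vendor's array:

```python
# Full stripe width of a RAID-10 set: stripe unit times number of mirrored pairs.
# Hypothetical illustration; real figures depend on the array's actual layout.
def full_stripe_bytes(stripe_unit: int, mirrored_pairs: int) -> int:
    return stripe_unit * mirrored_pairs

UNIT = 64 * 1024  # 64K stripe unit

for pairs in (2, 4, 8):
    print(pairs, full_stripe_bytes(UNIT, pairs))
# 2 pairs -> 128K, 4 -> 256K, 8 -> 512K: always check, don't assume 64K.
```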

At this stage, I'm not prepared to say 64K is definitely the better option; there is still some unexplained variance. For example, I've got consistently better performance out of one of the two Storage Processors in our CLARiiON, and EMC still has to satisfactorily explain why. And I must even out the SAN mirroring to the DR site: that one does introduce some major quirks even when reading, probably by monopolizing one of the SPs. But we'll get there.


--
Cheers
Nuno Souto
in hazy Sydney, Australia
dbvision@xxxxxxxxxxxx
--
//www.freelists.org/webpage/oracle-l
