RE: db_file_multiblock_read_count causing full scans to take longer?

  • From: "Allen, Brandon" <Brandon.Allen@xxxxxxxxxxx>
  • To: "Kevin Lidh" <kevin.lidh@xxxxxxxxx>
  • Date: Tue, 19 Dec 2006 16:42:56 -0700

Understood - I just wanted to make sure you weren't making decisions
based only on that small sample.

Interesting - I tested "time dd" on my HPUX 11 system and I see the same
behavior you describe - it runs fastest at bs=255k and then takes twice
as long as soon as I increase to bs=256k or higher.  However, on my AIX
5.3 system it seems to max out at about bs=32k and stays pretty
consistent all the way to 4096k.  Seems like some sort of HPUX
limitation.  I think the max I/O size on HPUX is 256k, so it makes sense
that there would be some difference at that point, but I would expect
the difference to show up at 257k, not 256k - and I wouldn't expect it to
be such a big difference.  You'd think HPUX could split the larger
read requests into 255k chunks a little more efficiently than that.
I'll check a couple more AIX and HPUX systems and see if they exhibit
the same behavior.
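
For reference, a sweep along these lines reproduces the kind of "time dd"
test we're both describing.  It's only a sketch - the device path is a
placeholder, so point it at whatever file or raw device you actually read
(ideally raw or large enough that the buffer cache doesn't skew the repeat
passes), and count is scaled so every pass reads about the same total
amount of data:

    # Sketch - /dev/rdsk/c0t0d0 is a placeholder; substitute the file or
    # raw device you are actually testing against.
    for bs in 64 128 255 256 512
    do
        count=$((262144 / bs))       # ~256 MB total per pass (bs is in KB)
        echo "bs=${bs}k count=$count"
        time dd if=/dev/rdsk/c0t0d0 of=/dev/null bs=${bs}k count=$count
    done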



-----Original Message-----
From: Kevin Lidh [mailto:kevin.lidh@xxxxxxxxx] 

That was just an example from the two trace files.  I wanted to show the
difference in elapsed times when retrieving the exact same blocks.

The "time dd" tests are very consistent.  Any block size up to 255k
performs the same.  Once you get to 256k and beyond, the time increases
3x to 4x.
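
To tie this back to the subject line: the size of each multiblock read
Oracle issues is db_block_size * db_file_multiblock_read_count, so (taking
an 8k block size purely as an example) the 256k boundary lines up like
this:

    8192 bytes/block * 32 blocks = 262144 bytes = 256k  (right at the boundary)
    8192 bytes/block * 31 blocks = 253952 bytes = 248k  (stays under it)

In other words, if the OS really does penalize 256k requests, a large
db_file_multiblock_read_count could push full scans over that edge.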





--
//www.freelists.org/webpage/oracle-l

