RE: I/O and db_file_multiblock_read_count

  • From: "Mark W. Farnham" <mwf@xxxxxxxx>
  • To: <mwf@xxxxxxxx>, <kevin.lidh@xxxxxxxxx>, "Kevin Closson" <kevinc@xxxxxxxxxxxxx>
  • Date: Mon, 11 Dec 2006 17:44:15 -0500

I see there has been a lot of discussion since I wrote this reply. Not sure
why it didn't send at 11 AM when I wrote it. (Probably controlled flight
into terrain.)

Anyway, factor in all the smart analysis in the interim, including the stuff
about Linux breaking up i/o requests gratuitously.

Looks like knowing the sweet spot for your exact disk farm is a good thing
when you set mbrc, and there may still be room for dinosaurs who set up disk
farms to fit the anticipated service requests. Not sure, though - SSD is
getting awfully competitive with the cost of spinning rust if you have to
figure in more human smarts to set up the array with any sort of asymmetry
to serve a use.

I'm still thinking the data was coming from array cache, though I may have
tangled up service requests for i/o with the blocking requested by dd, and
those may not be identical in all the layers down to the rust.

mwf

-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx]On
Behalf Of Mark W. Farnham
Sent: Monday, December 11, 2006 5:24 PM
To: kevin.lidh@xxxxxxxxx; Kevin Closson
Cc: oracle-l
Subject: RE: I/O and db_file_multiblock_read_count

So it appears that something on the OS side of things IS actually penalizing
you for fewer, bigger drinks.

You're getting just a shade over 1 millisecond service time per record at
131072 (clearly from some layer of cache unless you have solid state disks)
while you're limping along at almost 25 milliseconds to get a record only 8
times as big.

Now if the data were not already in some layer of cache I'd be thinking you
were paying double seeks and hitting two different stripes to get those 1 MB
chunks. But since it *seems* from the first test the data *IS* in cache,
that makes no sense unless there is a *feature* somewhere in the stack of
your disk farm access that makes a presumption it will need to go to disk
for drinks larger than *x* bytes, where *x* is some number bigger than
131072 and less than 1048576.

Humor me and try half a million. If that is fast, try 1,000,000. (Yeah, I
know, not a power of two, but we don't know whether the person who
configured the disk farm likes decimal numbers.)

If that is slow, try 262144.
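The probes above can be run as a single sweep; a minimal sketch (the
datafile path is the one from the test results below - point FILE at your
own datafile before running):

```shell
#!/bin/sh
# Sweep dd over the candidate block sizes from the message above to find
# where the per-request service time jumps. FILE defaults to the datafile
# path from this thread; override it for your system.
FILE=${FILE:-/ora/data/bordcdw/users/users01.dbf}
for BS in 131072 262144 500000 1000000 1048576; do
    echo "bs=$BS"
    time dd if="$FILE" of=/dev/null bs="$BS"
done
```

Watch for the block size at which the real time per record jumps; that is
the *x* where the stack stops serving you from cache-friendly request sizes.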

In any case it seems like Oracle is off the hook on this one, and you don't
even need to go to the fancier approach of reading chunks broken out more
precisely the way Oracle would in the scan.

Whether your SAN can be tamed to give you fast response for the bigger
drinks you would like is very tightly bound to the exact hardware and
software you have and how it is configured. If you solve that though, it is
very likely the response from within Oracle will be what you expect too.

Regards,

mwf

-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx]On
Behalf Of Kevin Lidh
Sent: Monday, December 11, 2006 9:39 AM
To: Kevin Closson
Cc: oracle-l
Subject: RE: I/O and db_file_multiblock_read_count

I apologize for the tardiness of my response.  This is the result of
timing the dd.  It seems to indicate the same thing as the scattered
read test.  Or am I reading that wrong?

(8k * 16)
Unix> time dd if=/ora/data/bordcdw/users/users01.dbf of=/dev/null
bs=131072
1290+1 records in
1290+1 records out

real        1.4
user        0.0
sys         0.5

(8k * 128)
Unix> time dd if=/ora/data/bordcdw/users/users01.dbf of=/dev/null
bs=1048576
161+1 records in
161+1 records out

real        4.0
user        0.0
sys         0.0

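A quick back-of-the-envelope check of what these timings imply, using the
record counts and real times reported above (awk just does the arithmetic):

```shell
#!/bin/sh
# Per-record service time and aggregate throughput implied by the dd runs:
# bs=131072 read 1290 records in 1.4 s; bs=1048576 read 161 records in 4.0 s.
awk 'BEGIN { r=1290; t=1.4; bs=131072;
  printf "bs=%d: %.2f ms/record, %.2f MB/s\n", bs, t/r*1000, r*bs/t/1048576 }'
awk 'BEGIN { r=161; t=4.0; bs=1048576;
  printf "bs=%d: %.2f ms/record, %.2f MB/s\n", bs, t/r*1000, r*bs/t/1048576 }'
```

So the 128 KB reads come back in about 1.09 ms each (roughly 115 MB/s
aggregate), while the 1 MB reads take about 24.84 ms each (roughly 40 MB/s)
- the bigger drinks are slower per byte, not just per record.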

On Fri, 2006-12-08 at 15:48 -0800, Kevin Closson wrote:
> do the "same" thing using dd. Whatever your db_block_size is, plug it
> in as follows:
>
> $ time dd if=<datafile_for_the tablespace> of=/dev/null
> bs=<block_size_in_bytes*16>
>
> then re-run:
> $ time dd if=<datafile_for_the tablespace> of=/dev/null
> bs=<block_size_in_bytes*128>
>
> please let me know what you find
>
>
>
>
>         ______________________________________________________________
>         From: oracle-l-bounce@xxxxxxxxxxxxx
>         [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Kevin Lidh
>         Sent: Friday, December 08, 2006 12:46 PM
>         To: oracle-l
>         Subject: I/O and db_file_multiblock_read_count
>
>
>
>         I was reading an article about the appropriate setting for
>         db_file_multiblock_read_count.  I'm on a HP-UX 11.11 64-bit
>         system with Oracle 9.2.0.7.0.  The original value was 16 and I
>         bounced the database and ran a million record full-scan test
>         (10046 trace) and then set the value to 128 (max value) and
>         re-ran the

--
//www.freelists.org/webpage/oracle-l


