How is AUTO db_file_multiblock_read_count calculated?

  • From: PD Malik <pdthedba@xxxxxxxxx>
  • To: Oracle-L Group <oracle-l@xxxxxxxxxxxxx>, sacrophyte@xxxxxxxxx, top@xxxxxxxxxxxx, info@xxxxxxxxxxxxxxxxxxxxx, jonathan@xxxxxxxxxxxxxxxxxx
  • Date: Wed, 6 Jul 2011 13:36:59 +0100

Hello Experts,

Starting with 10gR2, the standard recommendation is to leave
db_file_multiblock_read_count unset so that Oracle can calculate its
value automatically, but I've got some questions around it:

1. How? How does Oracle calculate it? I did a bit of Google digging and
found a couple of hits - one from Jonathan Lewis saying that it (actually
_db_file_exec_read_count, from which it is derived) is
db_buffer_cache/sessions (init parameters), and one from Charles Schultz's
blog (http://www.blogger.com/profile/07973399674184183130) saying that it's
db_buffer_cache/processes (init parameters) - unfortunately neither fits
exactly in my case. Here are the values:

db_cache_size : 7 GB
processes : 10000
sessions : 20000
db_file_multiblock_read_count : 42 (it was 43 before the 10.2.0.5 upgrade a
week ago)
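To show the mismatch concretely, here is a quick arithmetic check of both
proposed formulas, assuming an 8 KB block size (an assumption on my part -
substitute your actual db_block_size):

    -- Both formulas, assuming db_block_size = 8192 (an assumption;
    -- substitute your actual block size).
    select round(7 * 1024 * 1024 * 1024 / 8192 / 20000) buffers_per_session,
           round(7 * 1024 * 1024 * 1024 / 8192 / 10000) buffers_per_process
      from dual;

    -- buffers_per_session comes out around 46, buffers_per_process
    -- around 92 - neither is 42.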

As you can see, it doesn't fit either of the two calculations exactly.
But I'm not surprised (actually I'd have been surprised if it had worked
that simply), because this parameter is about how a system's I/O performs,
so there must be something else in that calculation. Which is exactly my
question - what is that 'something else', and how is it calculated?
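For anyone who wants to look at the derived values directly rather than
reverse-engineer the formula, the hidden parameters can be queried as SYS
via x$ksppi/x$ksppcv (the standard hidden-parameter technique, sketched
here):

    -- Run as SYS: show the hidden read-count parameters that the
    -- automatic calculation actually sets.
    select p.ksppinm  name,
           v.ksppstvl value
      from x$ksppi  p,
           x$ksppcv v
     where p.indx = v.indx
       and p.ksppinm in ('_db_file_exec_read_count',
                         '_db_file_optimizer_read_count');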

2. Why do I bother? Everyone seems to say leave it as it is, but my
questions/concerns are:

i. How did it get changed last week? I need to know how it works so that,
from a stability point of view, I can anticipate when it's going to change
and to what value. (I can only guess that the formula for its calculation
changed in 10.2.0.5, but is this the right answer?)
ii. What is that 'something else' on which this parameter depends, as I
asked in my last question? Because if we change that something else
unknowingly, we may end up with a surprise.
iii. It seems this parameter is set at DB/instance startup time - but
that's not the best time to know how my system's I/O is performing. I
believe this parameter represents (or at least intends to represent) the
'adjusted' or real value of my multiblock reads - but that can only be
known under a typical workload (when the buffer cache has got the usual
stuff in it), not at instance startup time, so is this value really
reliable for the purpose it intends to fulfil?

Considering all this - is Christian Antognini's approach of manually
calculating this parameter the best option then? (For reference, what I
understand that kind of manual calibration to look like in outline is
sketched below.)
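This is only a rough sketch of the general idea, not Antognini's actual
script; BIG_TABLE is a placeholder for any suitably large test table:

    -- Sketch of manual calibration: time a full scan at several settings
    -- and pick the value that gives the best multiblock read throughput.
    -- BIG_TABLE is a placeholder; to measure disk I/O rather than cache
    -- hits, it should be much larger than the buffer cache.
    alter session set db_file_multiblock_read_count = 16;
    set timing on
    select /*+ full(t) */ count(*) from big_table t;

    alter session set db_file_multiblock_read_count = 128;
    select /*+ full(t) */ count(*) from big_table t;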

Or, as usual, am I missing something basic?

Sorry - I wanted to try it myself on my home DB, but I need the answer a
bit urgently, so I've been lazy and am resorting to your help.

Thanks.
