Re: Buffer Hit ratios - Oracle still hasn't got it

  • From: "Niall Litchfield" <niall.litchfield@xxxxxxxxx>
  • To: p.mclarty@xxxxxxxxxx
  • Date: Thu, 26 Apr 2007 12:24:22 +0100

I don't really have too much trouble with that snippet, to be honest,
especially given the context it sits in; the quote below is the
immediately preceding text:


There are many factors to examine before considering whether to increase
or decrease the buffer cache size. For example, you should examine
V$DB_CACHE_ADVICE data and the buffer cache hit ratio.

A low cache hit ratio does not imply that increasing the size of the cache
would be beneficial for performance. A good cache hit ratio could wrongly
indicate that the cache is adequately sized for the workload.

To interpret the buffer cache hit ratio, you should consider the
following:

   - Repeated scanning of the same large table or index can artificially
     inflate a poor cache hit ratio. Examine frequently executed SQL
     statements with a large number of buffer gets, to ensure that the
     execution plan for such SQL statements is optimal. If possible, avoid
     repeated scanning of frequently accessed data by performing all of the
     processing in a single pass or by optimizing the SQL statement.

   - If possible, avoid requerying the same data, by caching frequently
     accessed data in the client program or middle tier.

   - Oracle blocks accessed during a long full table scan are put on the
     tail end of the least recently used (LRU) list and not on the head of
     the list. Therefore, the blocks are aged out faster than blocks read
     when performing indexed lookups or small table scans. When
     interpreting the buffer cache data, poor hit ratios when valid large
     full table scans are occurring should also be considered.

     Note: Short table scans are scans performed on tables under a certain
     size threshold. A small table is defined as the larger of 2% of the
     buffer cache and 20 blocks.



   - In any large database running OLTP applications, in any given unit of
     time most rows are accessed either one or zero times. On this basis,
     there might be little purpose in keeping the block in memory for very
     long following its use.

   - A common mistake is to continue increasing the buffer cache size.
     Such increases have no effect if you are doing full table scans or
     operations that do not use the buffer cache.
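
The guide's pointer to V$DB_CACHE_ADVICE alongside the ratio is the
sensible part of all this. For anyone following along, something like the
queries below gets you both numbers - these are mine, not the guide's,
written from memory against the 10g views, so treat them as a sketch to
verify rather than gospel:

    -- Classic hit ratio calculation from the cache-related statistics
    SELECT 1 - (phy.value / (cur.value + con.value)) AS cache_hit_ratio
      FROM v$sysstat cur, v$sysstat con, v$sysstat phy
     WHERE cur.name = 'db block gets from cache'
       AND con.name = 'consistent gets from cache'
       AND phy.name = 'physical reads cache';

    -- Predicted physical reads at various cache sizes, per the advisory
    SELECT size_for_estimate, buffers_for_estimate,
           estd_physical_read_factor, estd_physical_reads
      FROM v$db_cache_advice
     WHERE name = 'DEFAULT'
       AND advice_status = 'ON'
     ORDER BY size_for_estimate;

An estd_physical_read_factor that barely moves as size_for_estimate grows
is a much stronger "don't bother adding memory" signal than any single
ratio figure.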


If you do have a "few gets per execution" style app, then a low cache hit
ratio *is* something to be *investigated*. This is a world away from the
sort of advice that was around in the late 90s, along the lines of 'if hit
ratio X is < 95% (or 99, or whatever), then increase the buffer cache
until you exceed this target figure'. I think there's a real danger now in
the Oracle world of *over*-reacting to the hit-ratio target myths and of
saying that there are no circumstances under which we should care whether
data is found in memory or has to come from disk. I realize you didn't say
that; I'm just hijacking the thread a bit :(.
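
For the "investigate" part, the place I'd usually start is the statements
actually driving the gets, per the first bullet in the quoted doc. Again,
something along these lines - 10g V$SQL, off the top of my head, so check
the column names before trusting it:

    -- Top statements by logical I/O, with gets per execution
    SELECT *
      FROM (SELECT sql_id, executions, buffer_gets,
                   ROUND(buffer_gets / executions) AS gets_per_exec,
                   SUBSTR(sql_text, 1, 60) AS sql_text
              FROM v$sql
             WHERE executions > 0
             ORDER BY buffer_gets DESC)
     WHERE ROWNUM <= 10;

A statement with a huge gets_per_exec is usually a plan problem; lots of
executions with a handful of gets each is the app pattern where the ratio
actually means something.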


On 4/26/07, Peter McLarty <p.mclarty@xxxxxxxxxx> wrote:

I was just reading the 10.2 Performance Tuning Guide and found this little
gem. I thought it would have been written out by now:

"7.2.3.1 Increasing Memory Allocated to the Buffer Cache

As a general rule, investigate increasing the size of the cache if the
cache hit ratio is low and your application has been tuned to avoid
performing full table scans."

Clearly no one in Oracle has updated the Tech Writers for some time

Cheers

--
Peter McLarty
Database Administrator
Student System Upgrade Project
Central Queensland University

Email: p.mclarty@xxxxxxxxxx
Phone: 07 4923 2876
Fax: 07 4923 2721
Mobile: 04 0209 4238

--
//www.freelists.org/webpage/oracle-l

--
Niall Litchfield
Oracle DBA
http://www.orawin.info
