Re: Maximum Db_cache_size?

  • From: "Jonathan Lewis" <jonathan@xxxxxxxxxxxxxxxxxx>
  • To: <oracle-l@xxxxxxxxxxxxx>
  • Date: Tue, 20 Jun 2006 08:04:28 +0100



A couple of algorithmic issues with buffer cache size -

A "small" table is loaded at the mid-point when you do a tablescan. "small" is defined
as 2% of the buffer cache. So you can be
unlucky and find that increasing the cache
dramatically just happens to cause a lot more
tablescans to enter at the mid-point, thus
knocking a lot of useful blocks out of the
cache prematurely.
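If you want to see the threshold on your own system, a query
along these lines (run as SYS; treat it as a sketch, since the
hidden parameter details can vary by version) reports
_small_table_threshold, which defaults to 2% of the cache:

    select a.ksppinm  name, b.ksppstvl  value
    from   x$ksppi a, x$ksppcv b
    where  a.indx = b.indx
    and    a.ksppinm = '_small_table_threshold';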


When there are a lot of free buffers in the buffer cache,
they are preferentially used for creating CR copies of data
blocks. In theory there is a limit of 6 (_db_block_max_cr_dba)
CR copies of a block - Oracle 9 seems to be better at sticking
to this than earlier versions (possibly as a side-effect of
other changes), but you can get lots more than 6. When this
happens, you can get extreme 'cache buffers chains' latch
contention against blocks that have large numbers of copies.
Creating an unnecessarily large buffer cache may leave you in
a state where you frequently end up with excessive copies of
popular blocks.
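As a quick check, a query like the following against v$bh lists
blocks that currently have more than 6 CR clones (only a sketch -
you may want to join objd to dba_objects.data_object_id to see
which objects they belong to):

    select objd, file#, block#, count(*) clones
    from   v$bh
    where  status = 'cr'
    group by objd, file#, block#
    having count(*) > 6
    order by clones desc;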


Until 10g, actions like shrink segment, drop table, or
truncate table require a search of the buffer cache to mark
buffers as free. The time to clear ought to be related to
the number of blocks in the object (in 8i and 9i), but if
you have a large cache with lots of CR clones, the impact
of regular drops etc. could be exaggerated.
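If you want a feel for the effect, something as simple as the
following (table name purely illustrative) shows the elapsed
time of a truncate, which includes the pass through the buffer
cache:

    set timing on
    truncate table scott.t1 reuse storage;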


On the flip side - there is a bug in 8i and 9i relating
to the touch count. When reading blocks that have been
loaded by tablescan (or index fast full scan), Oracle
'forgets' to increment the touch count. This means that
even "small" tables fall out of the buffer cache fairly
promptly - no matter how popular they are. Consequently,
creating a KEEP cache large enough for all the tables that
are frequently scanned, and allocating those tables to the
KEEP cache, can reduce the I/O component - especially if
you are bypassing the filesystem. (The bug is fixed in 10g.)
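A minimal sketch of that setup (the sizes and table/index names
are illustrative; in 8i you would set buffer_pool_keep in the
init.ora instead):

    alter system set db_keep_cache_size = 512M scope=spfile;

    alter table scott.small_lookup    storage (buffer_pool keep);
    alter index scott.small_lookup_pk storage (buffer_pool keep);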



Regards

Jonathan Lewis
http://www.oracle.com/technology/community/oracle_ace/ace1.html#lewis

The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/faq/ind_faq.html

Cost Based Oracle: Fundamentals
http://www.jlcomp.demon.co.uk/cbo_book/ind_book.html


Subject: Maximum Db_cache_size?
Date: Mon, 19 Jun 2006 10:43:37 -0000
From: "Laimutis Nedzinskas" <Laimutis.Nedzinskas@xxxxxxxxxxxxx>

Is it good or bad to use all available RAM (not used for other
purposes, e.g. shared pool or PGA) for the buffer cache?
I am talking about the 32GB RAM range.

Oracle is not an in-memory database but anyway - memory access
is faster than disk access. However, I understand that the way
(algorithms) Oracle uses RAM may have a practical turning point
where adding more RAM will only slow things down.


Therefore the question is:

- is Oracle (version 9.2) better at utilizing RAM (say, 10-20GB)
for the buffer cache?

- or is it better to let the file system utilize this RAM for
file cache?

- none of the above, stay with moderate RAM usage (a few
gigabytes for the buffer cache) just because the buffer cache
hit ratio is good (98-99%)? I would just like to point out that
the 1% of disk ("raw") access makes up a considerable part of
the response time, maybe 50% or so.
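To illustrate that last point with assumed figures: if a buffer
get from cache costs about 0.05 ms and a physical read about
8 ms, then at a 99% hit ratio

    0.99 * 0.05 ms  +  0.01 * 8 ms  =  0.0495 + 0.08  =  ~0.13 ms per get

so the 1% of gets that go to disk account for well over half the
total time.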


-- //www.freelists.org/webpage/oracle-l

