Alan, I beg to differ here. Caching tables works excellently on paper, but when you put it into practice it is otherwise. In real life, the tables will not stay cached in the buffer cache, and they are still subject to the normal LRU or touch-count aging. There is a _small_table_threshold parameter that defines which tables are eligible for caching; when a table is bigger than about 2% of the buffer cache, it will not be cached. So the point here is that, for the OP, the 200-300M tables will NOT be cached in the buffer cache unless he sets db_cache_size (the buffer cache) to 10-15G.

On 7/10/07, Allen, Brandon <Brandon.Allen@xxxxxxxxxxx> wrote:
I think you'd still be better off to cache those full table scans in the Oracle cache (e.g. alter table my_fts_tab cache). That way, you avoid having to copy them from the OS cache to the DB cache and all the overhead that is involved with performing a consistent get, which would make your performance even better. For more info:

http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_7002.htm#i2215507

-----Original Message-----
From: Chris Dunscombe [mailto:chris@xxxxxxxxxxxxxxxxxxxxx]

One situation I've experienced was a smallish (< 250GB) third-party online operational database on Solaris where the OS cache acted as a cache for full table scans of tables around the 100-300 MB size. This worked well, although it was more by accident than design.
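For what it's worth, the 2%-of-buffer-cache arithmetic from the top of the thread can be sketched as a quick back-of-the-envelope check. This is only illustrative: the function name is made up, and the exact cutoff is version-dependent (_small_table_threshold is an undocumented parameter), so the 2% figure here is just the rule of thumb from the discussion above.

```python
# Back-of-the-envelope check of the ~2% small-table rule discussed above.
# The 2% cutoff and the 200-300 MB table sizes come from the thread;
# the function name is just for illustration.

def min_cache_for_full_caching(table_mb, threshold_fraction=0.02):
    """Smallest buffer cache (in MB) for which a table of table_mb
    stays under the small-table threshold (~2% of the cache)."""
    return table_mb / threshold_fraction

# A 200-300 MB table needs roughly a 10-15 GB buffer cache:
print(min_cache_for_full_caching(200) / 1024)  # ~9.8 GB
print(min_cache_for_full_caching(300) / 1024)  # ~14.6 GB
```

Which is why the 10-15G db_cache_size figure above is about right for those table sizes.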
--
Best Regards,
K Gopalakrishnan
Co-Author: Oracle Wait Interface, Oracle Press 2004
http://www.amazon.com/exec/obidos/tg/detail/-/007222729X/
Author: Oracle Database 10g RAC Handbook, Oracle Press 2006
http://www.amazon.com/gp/product/007146509X/
--
http://www.freelists.org/webpage/oracle-l