Riyaj, thanks for the answer.

> If your question is, "why do the memory advisors prefer to allocate more
> memory for PGA instead of SGA?", then that is probably some bug in the
> code causing overallocation to PGA.
>
> I have a few questions before I can recommend anything:
>
> 1. Can you share the top wait events section from the AWR report?

Essentially "db file sequential read", "db file scattered read" and
"DB CPU". The load varies a bit, but those are usually among the top 5.

> 2. Does your application use PX execution extensively?

No, not extensively (but occasionally, I'd say).

> Do the execution plans have heavy full table scans on bigger tables, or
> is the workload more OLTP? Remember that FTS might trigger direct path
> reads and require more PGA.

There are some FTS on large (> 5 million rows) tables. The load is a mix
of OLTP and data-warehouse/aggregation-style queries.

> If I were you (knowing that the supplied data is inadequate), I would set
> lower-bound values for db_cache_size, shared_pool_size, large_pool_size,
> java_pool_size, etc. The memory advisors will honor those lower bounds
> and keep at least that much memory for each area. This will prevent
> overallocation from SGA to PGA.

As a first step I disabled automatic memory management and switched to
automatic shared memory management plus automatic PGA management. Since
then, most (~84%) of the SGA (50 GB total) is used for the buffer cache.
The PGA seems to stabilize at around 4 GB - but our large aggregation
statements have not yet been run.

> Also, if a bigger PGA is allocated, the RDBMS code seems to have a
> tendency to trigger more direct path reads, which in turn need more
> memory for PGA - a cyclic, invalid logic. I have seen a few databases
> suffering from this chronic issue.

Thanks for the hint; I wasn't aware that direct path reads would increase
the PGA.

Regards
Thomas
--
//www.freelists.org/webpage/oracle-l
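P.S. For anyone following along, the change described above - disabling AMM, enabling ASMM with explicit lower bounds for the SGA pools, plus automatic PGA management - could be sketched roughly like this. All sizes are illustrative examples only, not recommendations for any particular system:

```sql
-- Illustrative sketch only; sizes are examples, not tuning advice.

-- Disable AMM by zeroing MEMORY_TARGET, then enable ASMM via SGA_TARGET.
-- (MEMORY_MAX_TARGET may also need to be reset, depending on the setup.)
ALTER SYSTEM SET memory_target = 0 SCOPE = SPFILE;
ALTER SYSTEM SET sga_target = 50G SCOPE = SPFILE;

-- Lower bounds: ASMM keeps at least this much memory in each pool,
-- preventing the advisors from shrinking them in favor of PGA.
ALTER SYSTEM SET db_cache_size = 32G SCOPE = SPFILE;
ALTER SYSTEM SET shared_pool_size = 6G SCOPE = SPFILE;
ALTER SYSTEM SET large_pool_size = 1G SCOPE = SPFILE;

-- Automatic PGA management with an explicit target.
ALTER SYSTEM SET pga_aggregate_target = 8G SCOPE = SPFILE;

-- SPFILE-only changes such as MEMORY_TARGET require an instance restart.
```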