Re: SGA & shared pool size

  • From: Tanel Poder <tanel@xxxxxxxxxx>
  • To: exriscer <exriscer@xxxxxxxxx>
  • Date: Tue, 20 Apr 2010 10:57:40 +0800

(resending to right list)

Hi,

Which bug is it, do you know?

If you still get ORA-4031s with a single-subpool shared pool, then perhaps
there's a problem with the sub-subpool sizing (4 different pools used for
objects with different expected lifetimes).
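
A quick way to sanity-check how many subpools you actually have (and whether
the durations feature is on) is to look at the relevant hidden parameters.
A rough sketch, run as SYS, and the parameter names may vary by version:

  SELECT a.ksppinm  AS parameter,
         b.ksppstvl AS value
  FROM   x$ksppi a, x$ksppcv b
  WHERE  a.indx = b.indx
  AND    a.ksppinm IN ('_kghdsidx_count', '_enable_shared_pool_durations');

_kghdsidx_count is the number of shared pool subpools in use.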

Unfortunately you'd need to take a shared pool heap dump to find the sizes of
the sub-subpools... and that dump will probably hang your instance for a
while...
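
If you do end up taking one, a level 2 heapdump (the SGA heap) via oradebug is
the usual route. Just an illustration; run it as SYSDBA on a quiet system,
since it can hold shared pool latches while it runs:

  oradebug setmypid
  oradebug dump heapdump 2
  oradebug tracefile_name

The subpool/duration heap sizes show up in the resulting trace file.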

The workaround would be to use _enable_shared_pool_durations = false to keep
everything in a single sub-subpool and reduce the memory pressure that way.

It's documented in ML note 843638.1
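
For reference, setting it would look roughly like this; it's an underscore
parameter, so only touch it with Oracle Support's blessing, and it needs an
instance restart to take effect:

  ALTER SYSTEM SET "_enable_shared_pool_durations" = FALSE
    SCOPE = SPFILE SID = '*';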


--
Tanel Poder
http://tech.e2sn.com
http://blog.tanelpoder.com




On Tue, Apr 20, 2010 at 3:47 AM, LS Cheng <exriscer@xxxxxxxxx> wrote:

> Hi
>
> We don't really have problems with storing SQL, SQL plans or PL/SQL
> objects. This is a DWH, so there is not much pressure from SQL statements.
>
> This is 10.2.0.4.2 and, as I mentioned previously, most probably we are
> hitting a RAC bug; that is why we need such a large shared pool. We have had
> smaller shared pools on other nodes, around 3 GB to 4 GB, and the time the
> instance takes to fail is pretty short.
>
> We have had many problems with the shared pool. In the past we hit ORA-04031
> constantly and had to decrease the subpool count to 1; even after that we
> still hit ORA-04031 from time to time, sometimes with errors such as "GES
> resource exhausted".
>
> Did I ask who has seen 7 million entries in the row cache...? I think that is
> probably one of our problems, but not much can be done about it since heavy
> use of partitions and subpartitions is involved.
>
> Previously, with a smaller shared pool (7 GB), the system hung and rebooted
> almost every day. Somehow LCK kicks in to free memory, and while it frees the
> memory the entire node hangs; sometimes it hangs the other nodes as well,
> because some local process holds the CF enqueue, or the instance simply fails
> with "waited too long for a row cache lock".
>
> Thanks
>
> --
> LSC
>
>
>
>
> On Mon, Apr 19, 2010 at 9:21 PM, Marcin Przepiorowski <pioro1@xxxxxxxxx> wrote:
>
>> Hi,
>>
>> I have always thought that increasing the shared pool to an enormous size
>> like 22 GB creates more problems for the database and decreases performance,
>> because of the time spent on shared pool management. Did you ever try to
>> decrease the shared pool to 2 or 3 GB?
>> How many SQL statements with plans are you keeping in the shared pool? How
>> much PL/SQL code do you have? And maybe, first of all, what version are you
>> on?
>>
>> regards,
>> --
>> Marcin Przepiorowski
>> http://oracleprof.blogspot.com
>>
>>
>> On Fri, Apr 16, 2010 at 8:13 AM, LS Cheng <exriscer@xxxxxxxxx> wrote:
>> > Hi
>> >
>> > I configured one a few weeks ago with a 48 GB SGA and a 22 GB shared pool
>> > for a DWH.
>> >
>> > I think I hit some bug; that is why I need to set the shared pool so
>> > large. This is a 6-node RAC, and whenever the shared pool is under
>> > pressure the LCK process starts to free shared pool memory (observed from
>> > v$rowcache and v$sgastat) and holds the shared pool latch, which causes a
>> > hang in the node (sometimes the entire system, all 6 nodes). Previously I
>> > had the shared pool set to 8 GB and the hang happened every day (many
>> > times the instance died); when I increased it to 22 GB it only happens
>> > about once every couple of weeks.
>> >
>> > v$rowcache shows 7 million entries; has anyone seen more than that? I am
>> > really curious.
>> >
>> >
>> > Thanks
>> >
>> > --
>> > LSC
>> >
>> >
>> >
>> >
>> > On Fri, Apr 16, 2010 at 3:55 AM, Ram Raman <veeeraman@xxxxxxxxx> wrote:
>> >>
>> >>
>> >> I saw a big database today with an SGA size of 56 GB and a shared pool
>> >> of 12 GB. SGA_TARGET was set to 0. The database itself is a few
>> >> terabytes. I have not heard of such big SGA or SP sizes. I thought
>> >> having big memory sizes might cause latch contention and CPU issues.
>> >>
>> >> What are the maximum SGA and SP sizes of DBs that you have seen?
>> >>
>> >> Thanks.
>> >
>>
>>
>>
>
>
