Re: Using AMM with Linux RAC

  • From: Christo Kutrovsky <kutrovsky.oracle@xxxxxxxxx>
  • To: troach@xxxxxxxxx
  • Date: Tue, 9 Mar 2010 14:43:27 -0500

I don't think this is possible in the current architecture.

I think the easiest workaround would be to move the dynamic part of the PGA into the SGA :)
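
For anyone finding this thread in the archive: a quick way to confirm which path an instance took is /proc/meminfo for hugepages versus /dev/shm for AMM granules. A sketch only; the meminfo sample below is made up so the arithmetic is reproducible, and on a live box you would read /proc/meminfo directly:

```shell
# Sketch: how much memory is actually backed by hugepages.
# The here-string stands in for /proc/meminfo so the numbers are fixed;
# on a real system use:  grep Huge /proc/meminfo
meminfo='HugePages_Total:    4096
HugePages_Free:     1024
Hugepagesize:       2048 kB'

echo "$meminfo" | awk '
  /HugePages_Total/ { total = $2 }
  /HugePages_Free/  { free  = $2 }
  /Hugepagesize/    { sz    = $2 }   # page size in kB
  END { printf "hugepages in use: %d MB\n", (total - free) * sz / 1024 }'
# prints: hugepages in use: 6144 MB
```

If that prints 0 while /dev/shm is full of granule files, the SGA is on POSIX shared memory (AMM) and the reserved hugepages are sitting idle.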

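On Sherrie's 5.6gb-vs-8.75gb question further down: only the SGA lands in /dev/shm (PGA stays process-private, as Tanel notes), so the default split she read about accounts for most of the gap. Back-of-the-envelope, treating the 60/40 SGA/PGA split as an illustrative assumption rather than an exact Oracle formula:

```shell
# Illustrative arithmetic only: apply a ~60/40 SGA/PGA split to the
# 8.75 GB combined memory_target from the original post.
awk 'BEGIN {
  mt  = 8.75 * 1024        # combined memory_target, MB
  sga = mt * 0.60          # the share that ends up in /dev/shm
  pga = mt - sga           # process-private, never in /dev/shm
  printf "sga ~ %.0f MB, pga ~ %.0f MB\n", sga, pga
}'
# prints: sga ~ 5376 MB, pga ~ 3584 MB
```

Roughly 5.4gb of SGA is close to the 5.6gb visible in /dev/shm, which is consistent with the PGA target never showing up there.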

On Thu, Feb 18, 2010 at 12:00 PM, Thomas Roach <troach@xxxxxxxxx> wrote:

> Tanel,
>
> Is it possible for Oracle to eventually have AMM and hugepages support?
>
> I recall some blog posts by Kevin Closson on this topic. Was the
> limitation that AMM needs to resize memory dynamically, and hugepages
> don't support that?
>
> Thanks,
>
> Tom
>
>
> On Thu, Feb 18, 2010 at 11:02 AM, Tanel Poder <tanel@xxxxxxxxxx> wrote:
>
>> Hi Sherrie,
>>
>> The memory which is reserved for PGA_AGGREGATE_TARGET will not show up in
>> /dev/shm as it's not shared (PGAs are still allocated using process-private
>> memory).
>>
>> You can query V$MEMORY_DYNAMIC_COMPONENTS to see how Oracle is currently
>> using the memory:
>>
>> SQL> select component, current_size from v$memory_dynamic_components where
>> component like '%Target%';
>>
>> COMPONENT                      CURRENT_SIZE
>> ------------------------------ ------------
>> SGA Target                        545259520
>> PGA Target                        293601280
>>
>>
>> Note that hugepages are not used with AMM on Linux, so you may not be
>> getting all the performance out of your hardware...
>>
>> --
>> Tanel Poder
>> http://tech.e2sn.com
>> http://blog.tanelpoder.com
>>
>>
>> On Thu, Feb 18, 2010 at 9:51 PM, Sherrie Kubis <
>> Sherrie.Kubis@xxxxxxxxxxxxxxxxxx> wrote:
>>
>>>  We have a 3-node (each 32 gb ram) Linux RAC cluster running Oracle
>>> 11.1.0.7, housing 4 instances with a total memory_target of 8.75 gb.  OEM
>>> shows 29% ram is used. We are using AMM.
>>>
>>>
>>>
>>> The free -mt command shows:
>>>
>>>
>>>
>>>              total       used       free     shared    buffers     cached
>>>
>>> Mem:         32189      31519        669          0       1250      25168
>>>
>>> -/+ buffers/cache:       5101      27088
>>>
>>> Swap:        32767         39      32728
>>>
>>> Total:       64957      31559      33397
>>>
>>>
>>>
>>> It looks like tmpfs is 16gb:
>>>
>>>  df -k /dev/shm
>>>
>>> Filesystem           1K-blocks      Used Available Use% Mounted on
>>>
>>> tmpfs                 16481056   5604532  10876524  35% /dev/shm
>>>
>>>
>>>
>>> We are seeing 5.6gb used in /dev/shm, but the SGAs are defined to
>>> consume a total of 8.75gb, so I'm confused about the difference.  I read
>>> that, by default, the SGA takes 60% of memory_target and the PGA gets the
>>> rest.  Is the lower value because there are not yet many connections?
>>>
>>>
>>>
>>> Does the tmpfs at 16gb mean we can only use 16gb of ram for our
>>> databases?
>>>
>>>
>>>
>>> Any insights would be appreciated.  We are just starting out with Linux
>>> and POSIX-style shared memory management, which is quite different from
>>> the System V shared memory segments and semaphores we were used to.
>>>
>>>
>>>
>>> Sherrie Kubis
>>>
>>> Southwest Florida Water Management District
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> IMPORTANT NOTICE:  All E-mail sent to or from this address are public 
>>> record and archived.  The Southwest Florida Water Management District does 
>>> not allow use of District equipment and E-mail facilities for non-District 
>>> business purposes.
>>>
>>>
>>>
>>
>
>
> --
> Thomas Roach
> 813-404-6066
> troach@xxxxxxxxx
>



-- 
Christo Kutrovsky
Senior Consultant
Pythian.com
I blog at http://www.pythian.com/blogs/