Re: Global Cache and Enqueue Services statistics

  • From: Subodh Deshpande <deshpande.subodh@xxxxxxxxx>
  • To: riyaj.shamsudeen@xxxxxxxxx
  • Date: Fri, 3 Aug 2012 10:40:09 +0530

Riyaj/Amir,
this is the EBS suite, a product of Oracle..

Regarding your point 1, statement no. 2:

Quote
 Investigate the table below and review the indexes on it. If the
application is populating the rows through a sequence or if the indexes
are almost unique, then you might want to consider hash partitioning
those indexes or dropping them.
UnQuote

Dropping and recreating/rebuilding the indexes will not be an issue, but
changing to hash partitioning will be considered an architectural change
(a rough sketch of such a rebuild is below). It is better to ask
Oracle/MOS for a permanent fix from a performance point of view, since
your scenario is not going to be unique. Secondly, if tomorrow a patch is
released that contains a fix related to this object, whatever you are
doing now may have to be redone/reconsidered, so you will have to
document what you are doing.

If it is an insert, the application might be staging the data in some
intermediate tables and then putting it into the target tables...
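
For illustration only, a hash-partitioned rebuild of one of those indexes
could look roughly like the sketch below. RIYAJ_INDEX1, SOME_TABLE and
LEADING_COL are placeholders for the real (masked) objects, and 32
partitions is just an example figure to spread the hot right-hand blocks:

    -- sketch only: object and column names are placeholders, not the
    -- real (masked) EBS objects; test and clear with Oracle/MOS first
    DROP INDEX rep.riyaj_index1;

    CREATE INDEX rep.riyaj_index1 ON rep.some_table (leading_col)
      GLOBAL PARTITION BY HASH (leading_col)
      PARTITIONS 32;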

Point no 2

Quote
I would increase cache to 30 and sleep_time
to 60 seconds, but understand that this can delay the requests by one
minute, which may not be optimal for high-frequency concurrent requests
such as inventory workers or RVCTP etc
Unquote

with a sleep time of 60 seconds, the next request will be picked up only
after one minute. Better to consider having different managers for fast,
slow, immediate and other concurrent requests; for each of those you can
define a different sleep time, cache size, etc. (a query to review the
current settings is sketched below)..
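
A sketch of how the current settings per manager could be reviewed,
assuming the standard FND_CONCURRENT_QUEUES / FND_CONCURRENT_QUEUES_TL
columns (actual changes should still go through the Define Concurrent
Manager form):

    SELECT q.concurrent_queue_name,
           t.user_concurrent_queue_name,
           q.max_processes,
           q.sleep_seconds,     -- manager sleep time
           q.cache_size         -- requests cached per work cycle
      FROM applsys.fnd_concurrent_queues    q,
           applsys.fnd_concurrent_queues_tl t
     WHERE t.concurrent_queue_id = q.concurrent_queue_id
       AND t.application_id      = q.application_id
       AND t.language            = USERENV('LANG')
     ORDER BY q.max_processes DESC;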

thanks..subodh

On 1 August 2012 22:53, Riyaj Shamsudeen <riyaj.shamsudeen@xxxxxxxxx> wrote:

> Hello Amir
>    Thanks for the data.
>   ( I have hidden the table and object names to protect your privacy. But,
> if you refer to node 1 AWR reports, you would instantly identify the
> object  names )
>   The issues are listed in order of intensity.
> 1. There is a high amount of buffer busy/gc buffer busy waits on the
> indexes of one table. That table has high insert concurrency too.
>
>     Investigate the table below and review the indexes on it. If the
> application is populating the rows through a sequence or if the indexes
> are almost unique, then you might want to consider hash partitioning
> those indexes or dropping them.
>
>     From the names of the table and indexes, I am guessing that all three
> indexes at the top of the gc buffer busy list are on the table referenced
> by the statement with sql_id 4hzg9pmxdvuyz below.
>
>   Elapsed      CPU                  Elap per  % Total
>   Time (s)   Time (s)  Executions   Exec (s)  DB Time    SQL Id
> ---------- ---------- ------------ ---------- ------- -------------
>
>     14,072        141      330,725        0.0    13.2 4hzg9pmxdvuyz
> Module: REP45XX_CP
> INSERT INTO TXRMI0_XXX_XXXX_XXX_TMP VALUES (:B13 , :B12 , :B11 , :B1 , :B10
> , :B9 , :B8 , :B7 , :B6 , :B5 , :B4 , NULL, NULL, 'Current
> supplyXXXXXXXXXX' .
>
> node 2:
>    19,982        217      531,595        0.0    18.6 4hzg9pmxdvuyz
> Module: REP084522_CP
> INSERT INTO TXRMI0_XXXXXXXXXXXXX_TMP VALUES (:B13 , :B12 , :B11 , :B1 ,
> :B10
> , :B9 , :B8 , :B7 , :B6 , :B5 , :B4 , NULL, NULL, 'Current supply XXXXX
>
> node 1:
>     22,699        318      632,046        0.0    18.0 4hzg9pmxdvuyz
> Module: REP84522_CP
> INSERT INTO TXRMI0_XXXXXXXXXX+TMP VALUES (:B13 , :B12 , :B11 , :B1 , :B10
> , :B9 , :B8 , :B7 , :B6 , :B5 , :B4 , NULL, NULL, 'Current supply meets
> demand.
> Order is not required in OQIT.', SYSDATE, :B3 , :B2 , 'SUCCESS', NULL)
>
>
> Segments by Buffer Busy Waits
> -> % of Capture shows % of Buffer Busy Waits for each top segment compared
> -> with total Buffer Busy Waits for all segments captured by the Snapshot
>
>            Tablespace                      Subobject  Obj.   Buffer Busy    % of
> Owner         Name    Object Name            Name     Type         Waits Capture
> ---------- ---------- -------------------- ---------- ----- ------------ -------
> REP        REPLACED_ID RIYAJ_INDEX1            INDEX       12,398   18.02
> REP        REPLACED_ID RIYAJ_INDEX2            INDEX       12,092   17.58
> APPLSYS    XXXXAOL_TB  FND_CONCURRENT_REQUE    TABLE       10,911   15.86
> REP        REPLACED_ID RIYAJ_INDEX3            INDEX        7,083   10.30
> APPLSYS    XXXXAOL_ID  WF_ITEM_ATTRIBUTE_VA    INDEX        5,443    7.91
>           -------------------------------------------------------------
>
> Segments by Global Cache Buffer Busy
> -> % of Capture shows % of GC Buffer Busy for each top segment compared
>  -> with GC Buffer Busy for all segments captured by the Snapshot
>
>            Tablespace                      Subobject  Obj.     GC Buffer    % of
> Owner         Name    Object Name            Name     Type          Busy Capture
> ---------- ---------- -------------------- ---------- ----- ------------ -------
> REP        REPLACED_ID RIYAJ_INDEX1            INDEX       52,543   16.67
> REP        REPLACED_ID RIYAJ_INDEX2            INDEX       48,681   15.44
> REP        REPLACED_ID RIYAJ_INDEX3            INDEX       43,649   13.85
> REP        REPLACED_TB SOME_OTHER_T            TABLE       18,214    5.78
> INV        MFGPINV_ID  MTL_SUPPLY_N10          INDEX        9,614    3.05
>
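( For what it's worth, a quick way to watch these segment-level waits live
between AWR snapshots is something like the sketch below; it assumes
V$SEGMENT_STATISTICS, where the statistic is named 'gc buffer busy' or
'gc buffer busy acquire'/'release' depending on the release. )

    SELECT *
      FROM (SELECT owner, object_name, subobject_name, object_type,
                   statistic_name, value
              FROM v$segment_statistics
             WHERE statistic_name LIKE 'gc buffer busy%'
             ORDER BY value DESC)
     WHERE ROWNUM <= 10;
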
> 2. The fnd_concurrent_requests table is suffering from high gc buffer busy
> waits too. Check whether sleep_time and cache are properly set for managers
> with a higher number of workers (EBiz). The concurrent manager threads
> constantly peek into the fnd_concurrent_requests table to look for work,
> and if your managers are running from all nodes, gc buffer busy waits occur
> on the fnd_concurrent_requests table. Also, verify that these sql_ids are
> efficient: at3hqt06quby2 and 8s9vxqq4rp290; both access the
> fnd_concurrent_requests table. I would increase cache to 30 and sleep_time
> to 60 seconds, but understand that this can delay the requests by one
> minute, which may not be optimal for high-frequency concurrent requests
> such as inventory workers or RVCTP etc.
>
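( A sketch for checking how much of those two statements' time is
cluster-related and for pulling their AWR plans; the snap range 3626-3656
is taken from the AWR excerpt in this mail and the time deltas are in
microseconds. )

    SELECT sql_id, plan_hash_value,
           SUM(executions_delta)                        execs,
           ROUND(SUM(elapsed_time_delta)      / 1e6, 1) elapsed_sec,
           ROUND(SUM(cluster_wait_time_delta) / 1e6, 1) cluster_sec,
           SUM(buffer_gets_delta)                       buffer_gets
      FROM dba_hist_sqlstat
     WHERE sql_id IN ('at3hqt06quby2', '8s9vxqq4rp290')
       AND snap_id BETWEEN 3626 AND 3656
     GROUP BY sql_id, plan_hash_value;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('at3hqt06quby2'));
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('8s9vxqq4rp290'));
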
> 3. Row locking contention can be related to (2). FND managers select rows
> to process, and they can run into locking contention if cache and sleep
> time are not set properly.
>
> Segments by Row Lock Waits         DB/Inst: M24VT3/m24vt301  Snaps: 3626-3656
> -> % of Capture shows % of row lock waits for each top segment compared
> -> with total row lock waits for all segments captured by the Snapshot
>
>            Tablespace                      Subobject  Obj.      Row Lock    % of
> Owner         Name    Object Name            Name     Type         Waits Capture
> ---------- ---------- -------------------- ---------- ----- ------------ -------
> APPLSYS    XXXX_TB    FND_CONCURRENT_REQUE            TABLE       22,250   55.23
> APPLSYS    XXXX_TB    FND_USER                        TABLE        5,622   13.96
>
> 4. There were 29 gc remaster events too, but I don't think that is causing
> problems for you. You might want to review which segments are undergoing
> remastering; the LMD and LMON trace files will be useful for that..
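
( A sketch for seeing which objects have been remastered and how often,
assuming V$GCSPFMASTER_INFO is available on this release; DATA_OBJECT_ID
maps back to the segment through DBA_OBJECTS. )

    SELECT o.owner, o.object_name, o.object_type,
           m.current_master, m.previous_master, m.remaster_cnt
      FROM v$gcspfmaster_info m,
           dba_objects        o
     WHERE o.data_object_id = m.data_object_id
     ORDER BY m.remaster_cnt DESC;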
>
> HTH
> ( Sorry, I am removing part of the thread here as I got an overposting
> error earlier )
> Cheers
>
> Riyaj Shamsudeen
> Principal DBA,
> Ora!nternals -  http://www.orainternals.com - Specialists in Performance,
> RAC and EBS
> Blog: http://orainternals.wordpress.com
> OakTable member http://www.oaktable.com and Oracle ACE Director
>
> Co-author of the books: Expert Oracle Practices
> <http://tinyurl.com/book-expert-oracle-practices/>, Pro Oracle SQL,
> Expert PL/SQL Practices <http://tinyurl.com/book-expert-plsql-practices>
>
> Join me for next RAC training in Fall 2012
> <http://www.orainternals.com/services/training/advanced-rac-training/>:
>
>
>
>
> >
> > On Wed, Aug 1, 2012 at 11:21 AM, Riyaj Shamsudeen <
> > riyaj.shamsudeen@xxxxxxxxx> wrote:
> >
> >> Amir
> >>   Can you post the top SQL by cluster wait time and also the segments
> >> suffering from high buffer busy waits/gc buffer busy waits? Most
> >> probably, the gc buffer busy waits are causing downstream initrans-based
> >> locking contention. If the statement is an insert and the segment is an
> >> index segment, can you tell us how the column values are generated?
> >>    Better yet, I wouldn't mind if you can send the whole AWR report so
> >> that we can give you a little bit more insight.
> >> Cheers
> >>
> >> Riyaj Shamsudeen
> >> Principal DBA,
> >> Ora!nternals -  http://www.orainternals.com - Specialists in
> >> Performance, RAC and EBS
> >> Blog: http://orainternals.wordpress.com
> >> OakTable member http://www.oaktable.com and Oracle ACE Director
> >>
> >> Co-author of the books: Expert Oracle Practices
> >> <http://tinyurl.com/book-expert-oracle-practices/>, Pro Oracle SQL,
> >> Expert PL/SQL Practices <http://tinyurl.com/book-expert-plsql-practices>
> >>
> >> Join me for next RAC training in Fall 2012
> >> <http://www.orainternals.com/services/training/advanced-rac-training/>:
> >>
> >>
> >>
> >
>
>


-- 
=============================================
This Gmail Account will be deactivated in One Month's Time
=============================================


--
//www.freelists.org/webpage/oracle-l

