Re: Mysterious high consistent gets;

  • From: Tim Gorman <tim.evdbt@xxxxxxxxx>
  • To: Ashish Lunawat <ashish.lunawat@xxxxxxxxx>
  • Date: Mon, 18 Jun 2018 13:51:15 -0600

Block changes at 271/sec (i.e. 6th entry in "Load Profile") isn't "quite small", and neither is redo size at 63 KB/s (i.e. 4th entry in "Load Profile").  DML is happening, and it isn't insignificant.

It works best if you know what is going on in the LoadRunner test; ask what is being tested, and find out exactly what is being done.

Also, check the SQL Statistics section of the AWR report for DML against the same table(s) as the query in question.  If nothing else, the AWR can allow you to "fact check" what the testers say LoadRunner is doing.
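
For example (just a sketch, not from your report; YOUR_TABLE is a placeholder for the table(s) your query scans), something like this against V$SQL lists DML statements still in the shared pool that touch those tables, ordered by buffer gets:

    select sql_id,
           executions,
           buffer_gets,
           rows_processed,
           substr(sql_text, 1, 80) sql_text
      from v$sql
     where command_type in (2, 6, 7, 189)            -- INSERT, UPDATE, DELETE, MERGE
       and upper(sql_fulltext) like '%YOUR_TABLE%'   -- placeholder table name
     order by buffer_gets desc;

Compare what comes back against what the testers claim the scenario does.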




On 6/18/18 13:28, Ashish Lunawat wrote:

Unfortunately, the data modification volume is quite small, and there are not many sessions writing to the database. Here is what the load profile section of one of the AWR reports looks like.

Load Profile                    Per Second   Per Transaction  Per Exec  Per Call
~~~~~~~~~~~~~~~            ---------------   --------------- --------- ---------
             DB Time(s):               3.9               0.4      0.02      0.00
              DB CPU(s):               3.8               0.4      0.01      0.00
      Background CPU(s):               0.2               0.0      0.00      0.00
      Redo size (bytes):          63,873.2           6,606.3
  Logical read (blocks):          56,422.7           5,835.7
          Block changes:             271.0              28.0
 Physical read (blocks):           1,501.5             155.3
Physical write (blocks):              38.4               4.0
       Read IO requests:              15.5               1.6
      Write IO requests:               4.2               0.4
           Read IO (MB):              11.7               1.2
          Write IO (MB):               0.3               0.0
           IM scan rows:               0.0               0.0
Session Logical Read IM:
 RAC GC blocks received:              17.5               1.8
   RAC GC blocks served:              41.7               4.3
             User calls:           1,268.3             131.2
           Parses (SQL):              89.2               9.2
      Hard parses (SQL):               0.8               0.1
     SQL Work Area (MB):               9.7               1.0
                 Logons:               2.1               0.2
         Executes (SQL):             255.8              26.5
              Rollbacks:               0.0               0.0
           Transactions:               9.7

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:  100.00
            Buffer  Hit   %:   99.93    In-memory Sort %:  100.00
            Library Hit   %:   99.13        Soft Parse %:   99.16
         Execute to Parse %:   65.13         Latch Hit %:   99.95
Parse CPU to Parse Elapsd %:   80.77     % Non-Parse CPU:   99.50
          Flash Cache Hit %:    0.00

Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                           Total Wait       Wait   % DB Wait
Event                                Waits Time (sec)    Avg(ms)   time Class
------------------------------ ----------- ---------- ---------- ------ --------
DB CPU                                          13.8K              97.4
log file sync                       48,284      141.6       2.93    1.0 Commit
rdbms ipc reply                    299,603      121.7       0.41     .9 Other
direct path read                    13,064      114.4       8.76     .8 User I/O
DFS lock handle                     37,522       30.4       0.81     .2 Other
db file scattered read               4,302       27.4       6.37     .2 User I/O

Thanks


On Tue, Jun 19, 2018 at 3:16 AM, Tim Gorman <tim.evdbt@xxxxxxxxx> wrote:

    There is likely a big difference in overall conditions between
    when you run the query yourself versus how it is running out "in
    the wild" by LoadRunner.

    For example, in LoadRunner, are other sessions (or the same
    sessions) also performing data modification on the table(s) that
    this query is scanning?

    If so, then a fair portion of your "consistent gets" are likely
    coming from undo segments, not from table/index segments, as the
    query executed under LoadRunner is having to rebuild the
    consistent image at the point-in-time when the query began while
    updates/deletes/merges are happening concurrently.
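
    One way to confirm this (a sketch; :sid is a placeholder for the
    SID of one of the LoadRunner sessions) is to watch the session
    statistics that count undo application during consistent reads:

        select sn.name, st.value
          from v$sesstat st, v$statname sn
         where sn.statistic# = st.statistic#
           and st.sid = :sid   -- placeholder: a LoadRunner session
           and sn.name in ('consistent gets',
                           'consistent changes',
                           'data blocks consistent reads - undo records applied')
         order by sn.name;

    If the second and third statistics climb along with "consistent
    gets", the extra gets are coming from undo.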

    In contrast, when you are running the query by yourself for
    testing with autotrace and tkprof, there are probably no
    modifications to the table(s) going on, so "consistent gets" are
    very simple and relatively inexpensive.
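
    You can demonstrate the difference yourself (a rough sketch;
    table and column names are placeholders for your own): leave an
    uncommitted transaction against the table open in one session,
    then re-run the autotrace test in another:

        -- session 1: change blocks the query will read; do NOT commit
        update your_table set some_col = some_col where rownum <= 10000;

        -- session 2: the query must now apply undo to those blocks
        set autotrace traceonly statistics
        -- (run your query here)

    The "consistent gets" figure should rise noticeably above what
    you measured on a quiet system.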




    On 6/18/18 12:48, Ashish Lunawat wrote:

        Hi, I have a query which, when run through autotrace and
        tkprof, shows about 50,000 gets. But the same query, when
        fired as part of LoadRunner performance testing, causes about
        1.2 million consistent gets as seen in the AWR report. How is
        this possible?

        The database is running on a 2-node RAC with each node having
        32 cores. Tkprof shows this query takes about 0.8 seconds and
        causes 50K consistent gets, while when fired through
        LoadRunner it drives both RAC nodes to as much as 100% CPU.
        When monitoring session waits I can see hot block contention
        on a table with about 19K rows and on an index on that table.
        This query is responsible for about 94% of the gets and thus
        the high CPU utilization.

         Any clues how to troubleshoot this issue?

        Thanks
        Regards,
        Ashish



