Re: Re: sql run time

  • From: Ram Raman <veeeraman@xxxxxxxxx>
  • To: l.flatz@xxxxxxxxxx
  • Date: Mon, 9 Nov 2015 18:18:09 -0600

I collected the stats on one of the tables involved on Friday afternoon and
the performance of the query was back to the way it was before. It was
almost a shot in the dark, but it seems to have worked. There are about 5
tables in the query; some have 10 or 20 rows, one has a few thousand, and
the biggest has 200K rows. I collected the stats for the 200K-row table
at 50% and performance immediately became acceptable. Before that, I
tried collecting stats at 10% and 20%, but those did not work. Last week
the stats were at about 5% for this table.
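For reference, a stats collection at a fixed sample percentage on 11.2 would look something like the sketch below; the owner and table names are placeholders, not the actual objects from this thread:

```sql
-- Hypothetical example: gather stats on the large table at a 50% sample.
-- APP_OWNER and BIG_TABLE are placeholder names.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'APP_OWNER',
    tabname          => 'BIG_TABLE',
    estimate_percent => 50,
    cascade          => TRUE,      -- also gather stats on the table's indexes
    no_invalidate    => FALSE      -- invalidate existing cursors so plans re-parse
  );
END;
/
```

Setting no_invalidate to FALSE would explain an immediate plan change, since dependent cursors are invalidated right away rather than on Oracle's rolling schedule.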

I tried comparing the plans. The PHV was the same for both the 'good' and
the 'bad' plans, and the plans looked the same to me last week, but they look
different today - maybe I missed something in the hurry. There were only 20
lines in the plan. The good plan was showing 0.00 CPU seconds per exec in the
sp report; the bad one was showing 0.03 seconds. The SQL was being
executed about 200K times per hour.
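To confirm whether the plan hash value and per-execution cost really changed, v$sql can be queried directly; a sketch, where the SQL_ID is a placeholder to be replaced with the statement's actual ID:

```sql
-- Hypothetical check: plan hash value and per-execution work for one statement.
-- 'abcd1234efgh5' is a placeholder SQL_ID.
SELECT sql_id,
       plan_hash_value,
       child_number,
       executions,
       ROUND(buffer_gets / NULLIF(executions, 0))        AS gets_per_exec,
       ROUND(cpu_time / NULLIF(executions, 0) / 1000, 1) AS cpu_ms_per_exec
FROM   v$sql
WHERE  sql_id = 'abcd1234efgh5';
```

Note that v$sql only shows cursors still in the shared pool, which is why a flushed or aged-out 'good' plan can no longer be compared there.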

When it was running unacceptably, I captured some bind values and ran the
SQL myself; for all the cases I captured, it returned zero rows - what an
irritation when you are executing it several million times a day. As an
aside, the db file sequential reads are back to 12 ms now that performance
has returned to its previous level.

Thanks to Iggy, Dominic, Lothar and everyone else who helped. It would be
nice to know why the collection of stats worked.


On Fri, Nov 6, 2015 at 1:48 AM, l.flatz@xxxxxxxxxx <l.flatz@xxxxxxxxxx>
wrote:

Execution plans can change over time and switch from efficient to
inefficient. Often a misestimation of cardinality is the cause.
Quite often such a misestimation was always there and gets worse with data
growth. At some point it breaks.
I rarely deal with past plans, though. I want to know what is going wrong NOW
and how I can stop it.
On SE, runtime stats would be the measure of choice. Easy and fast to come
by.

alter session set statistics_level=all;
run your statement
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));

If strange waits are involved, a trace could be helpful too.

----Original Message----
From: dombrooks@xxxxxxxxxxx
Date: 06/11/2015 - 08:14 (GMT)
To: iggy_fernandez@xxxxxxxxxxx, veeeraman@xxxxxxxxx,
oracle-l@xxxxxxxxxxxxx
Subject: Re: sql run time


It depends (doesn't it always?).
Potentially, if your physical I/O time was slower, your SQL could be running
for longer, so you might need to do more work to get a read-consistent
picture (more buffer gets), which then makes it even slower, and so on -
it's circular.


On 6 November 2015, at 01:43, Iggy Fernandez <iggy_fernandez@xxxxxxxxxxx>
wrote:

I don't see how changes in db file sequential read time can impact buffer
gets.

Here are some ways buffer gets can change:


- Changes in execution plan
- Changes in data
- Changes in bind variables (in other words, there is skew in resource
consumption depending on the bind variables)
- Effects of the Oracle consistency schema, block cleanout, etc.
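The bind-variable case can be checked against the values Oracle periodically captures; a sketch, where the SQL_ID is a placeholder:

```sql
-- Hypothetical example: inspect captured bind values for a statement,
-- to look for skew in resource consumption across different binds.
-- 'abcd1234efgh5' is a placeholder SQL_ID.
SELECT child_number,
       name,
       datatype_string,
       value_string,
       last_captured
FROM   v$sql_bind_capture
WHERE  sql_id = 'abcd1234efgh5'
ORDER  BY last_captured DESC;
```

Bind capture is periodic rather than continuous, so the values shown are samples, not a complete history of every execution.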


If you don't have AWR, hopefully you are using Statspack. The sprepsql
script will give you insight into the execution history.
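The Statspack SQL report is run from SQL*Plus, typically as the PERFSTAT user; the script prompts for the begin and end snapshot IDs and the statement's hash value:

```sql
-- Run the Statspack SQL report from SQL*Plus (prompts for snapshot IDs
-- and the statement's old hash value).
@?/rdbms/admin/sprepsql.sql
```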

Iggy


------------------------------
Date: Thu, 5 Nov 2015 19:30:57 -0600
Subject: sql run time
From: veeeraman@xxxxxxxxx
To: oracle-l@xxxxxxxxxxxxx

Hi,

We have an 11.2 SE db that is experiencing big spikes in CPU. Upon further
investigation I see one SQL that has become resource intensive in terms of
buffer gets and CPU time. The SQL was running fine until a few days ago, even
though it was being executed tens of thousands of times in the half-hour
reporting period. Per the old reports, this SQL has always been executed that
many times. But something changed a few days ago such that this SQL shot up
in terms of the number of buffer gets per the report. I tried checking v$sql
for different versions today, but I see only one version of the SQL
(remembered that from Jonathan's advice last time). Is there a way with SE to
see the equivalent of awrddrpt?

Also, I see the db file sequential read time suddenly double in the newer
reports, to about 26 ms. Even though that is not a terrible number, the
previous average was about 12-13 ms. I am thinking the increase in the db
file sequential read time could not cause the increased buffer gets. Am I
right in my assumption?

Thanks,
Ram
