Re: Performance tuning of a system

  • From: Martin Berger <martin.a.berger@xxxxxxxxx>
  • To: "Kellyn Pot'vin" <kellyn.potvin@xxxxxxxxx>
  • Date: Mon, 10 Sep 2012 09:00:09 +0200

Kellyn,

even without testing it, I am _very_ sure that from the moment the ASH
and/or AWR snapshot intervals are manipulated, all of Oracle's standard
reports and also the EM pages go crazy.
This is one more reason _NOT_ to do this without a real need, and to
be aware of its real dangers.

Still, it's possible, and if it is done based on a well-made decision, I'm fine with it.
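
Just to see what is currently set before anybody touches anything, a
quick look at the underscore parameters could be something like this
(sysdba only, untested sketch, the usual caveats about x$ views apply):

  select p.ksppinm  as parameter,
         v.ksppstvl as value,
         p.ksppdesc as description
    from x$ksppi  p,
         x$ksppcv v
   where p.indx = v.indx
     and p.ksppinm like '\_ash\_%' escape '\';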

I'm somewhat suspicious of
> "...the metric for flushing this was carefully calculated to work with the 
> rest of the design..."

as this would mean 'one size fits all' has come true.
More likely, I'd call it a good compromise between many different needs.
And, as humans, it's just easier to think '1 ASH sample - 1 second; 1
AWR sample - 10 seconds' instead of
'1 ASH sample - 0.653 seconds; 1 AWR sample - 4.935 seconds'. Every
back-of-the-envelope calculation would be much more complicated for
humans!
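
That simplicity is also what makes the quick ASH math work. As a rough
sketch (assuming the default 1-second sampling), every row in
v$active_session_history stands for about one second of DB time, so a
plain count is already an estimate:

  -- rough estimate of DB time (in seconds) over the last 5 minutes,
  -- only valid with the default 1 ASH sample per second
  select count(*)                   as db_time_seconds_estimate,
         count(distinct session_id) as active_sessions_seen
    from v$active_session_history
   where sample_time > systimestamp - interval '5' minute;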

Martin

On Sun, Sep 9, 2012 at 11:22 PM, Kellyn Pot'vin <kellyn.potvin@xxxxxxxxx> wrote:
> I don't know the answer to this, but I would be interested to know: if you
> DID change the parameters for how often ASH is flushed to AWR, would the ASH
> and AWR data calculations still be correct in the reports?  Would it be
> essential to query the distinct ASH data directly to perform a correct
> analysis?
>
> I was surprised to see this, as in the last "Oracle Supported" ASH/AWR
> presentation on this subject, the guys were quite clear that we should NOT
> be changing how often it was flushed and "...the metric for flushing this
> was carefully calculated to work with the rest of the design..."
>
> Something to think about and query...
>
> Thanks!
>
>
> Kellyn Pot'Vin
> Senior Technical Consultant
> Enkitec
> DBAKevlar.com
> RMOUG Director of Training Days 2013
>
>
>
> ~Fall down seven times, get up eight!
> ________________________________
> From: Martin Berger <martin.a.berger@xxxxxxxxx>
> To: s.cislaghi@xxxxxxxxx
> Cc: kyle Hailey <kylelf@xxxxxxxxx>; Oracle L <oracle-l@xxxxxxxxxxxxx>
> Sent: Sunday, September 9, 2012 2:50 PM
> Subject: Re: Performance tuning of a system
>
> Ste,
>
> do you have any chance to pin down a 'transaction' for sure?
> Something like setting dbms_application_info (see the small sketch below)?
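>
> For example (just a sketch, the module and action names are made up),
> the application could tag each 'transaction' at its start:
>
>   begin
>     -- makes the ASH rows of this session carry module/action
>     dbms_application_info.set_module(
>       module_name => 'ORDER_FLOW',
>       action_name => 'CHECKOUT');
>   end;
>   /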
>
> My problem:
> * currently we do not know which part of your 'transactions' is slow.
> * it's rare and cannot be reproduced at will.
> * a target of 5 sec, and even a 'bad' query of 30 sec, is hard to find
> in dba_hist_active_session_history - you would need to sample
> v$active_session_history manually immediately after the issue arises
> (see the sketch right below this list).
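>
> A manual look right after an occurrence could be as simple as this
> sketch (the module filter assumes tagging as above, and the time window
> is just the example from your mail):
>
>   select sample_time, session_id, sql_id, event, wait_class, time_waited
>     from v$active_session_history
>    where module = 'ORDER_FLOW'
>      and sample_time between timestamp '2012-09-08 16:44:00'
>                          and timestamp '2012-09-08 16:46:00'
>    order by sample_time, session_id;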
>
> Doug Burns wrote about how ASH and AWR sampling can be manipulated:
> http://oracledoug.com/serendipity/index.php?/archives/1395-ASH-and-the-psychology-of-Hidden-Parameters.html
> But I would rather not go that way in production without a good reason
> and support.
>
> That's why I come back to identifying the bad 'transactions'. If you can
> pin those down, it should still give enough samples over some days so
> that they can be used with statistical methods.
>
> Do you have a tracking table where all 'transactions' - and their
> timing - are tracked by the application?
> If you have one, you can create a trigger there and do your own 'flush
> v$active_session_history into a persistent table' (a sketch follows below).
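>
> A very rough sketch of such a flush (table name made up, no duplicate
> handling, and remember a trigger itself cannot commit):
>
>   -- one-time setup: empty persistent copy of the ASH structure
>   create table my_ash_archive as
>     select * from v$active_session_history where 1 = 0;
>
>   -- what the trigger (or a job it submits) could run:
>   insert into my_ash_archive
>     select a.*
>       from v$active_session_history a
>      where a.sample_time > systimestamp - interval '10' minute;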
>
> hth,
> Martin
>
>
> On Sun, Sep 9, 2012 at 6:16 PM, Stefano Cislaghi <s.cislaghi@xxxxxxxxx>
> wrote:
>> Hi Kyle,
>>
>> your queries are very interesting and I will check them ASAP. Anyway I
>> gained some more information about the problem.
>> The constraint is to have all transactions performed in less than 5
>> seconds. A 'transaction' is not a single statement but, from the
>> application's point of view:
>>
>> - a few selects against the db
>> - data elaboration (by the app server)
>> - running 2 or 3 stored procedures
>>
>> So we know that almost 99% are under 2 sec, but sometimes others are
>> above 5 sec, even 20-30 sec. The DB is not directly incriminated, but we
>> have been asked to provide any information, if there is any, for
>> debugging. IMHO it is quite hard, because the 'long' transactions happen
>> every day in totally different time windows; I do not have enough data,
>> even with AWR, etc., to say what was happening, for instance, yesterday
>> at 16:44:56, where a transaction took 33 sec instead of 2. From the
>> application we know that during the long transaction no abnormal load
>> was put on the db, I mean no abnormal number of transactions in the same
>> second. Disk speed? Maybe, but how do we get the disk queue or any other
>> interesting info from the DB?
--
//www.freelists.org/webpage/oracle-l

