Have a suite of regression tests ordered by most important business function
paired with numbers regarding the desired average response time and maximum
allowable response time. (See Cary Millsap’s writings for a more
sophisticated quantification of service level fulfillment by percentage of
results within ranges, if you want to do a little better than a simple maximum
allowable.)
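The percentage-within-range idea can be sketched in a few lines. This is just an illustration; the function name, the "order entry" business function, and the 500 ms threshold are all hypothetical examples, not anything from Millsap's actual method:

```python
# Sketch: service-level compliance measured as the fraction of
# executions finishing within a response-time threshold, rather than
# a single maximum-allowable number. All names/thresholds are made up.

def sla_compliance(times_ms, threshold_ms):
    """Return the fraction of recorded times at or under threshold_ms."""
    if not times_ms:
        return 0.0
    within = sum(1 for t in times_ms if t <= threshold_ms)
    return within / len(times_ms)

# Hypothetical SLA: 95% of "order entry" responses within 500 ms.
times = [120, 340, 480, 510, 290, 450, 230, 390, 610, 310]
rate = sla_compliance(times, 500)
print(f"{rate:.0%} of executions within SLA")
```

Here 8 of the 10 samples fall within 500 ms, so the suite would fail a 95% target even though most runs were fast.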
Make sure any transactional regression tests can and do end in “rollback”
unless you have a pool of pending batch transactions to run or you don’t care
whether they are actually committed on your performance test database.
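A minimal harness for the "end in rollback" point might look like the following. This is a sketch only: sqlite3 stands in for the real database, and the table and statement are hypothetical; the point is simply that the timed DML is rolled back so the performance test database is left unchanged:

```python
import sqlite3
import time

# Sketch: time a transactional test's DML, then roll back so the test
# leaves no committed changes behind. sqlite3 is a stand-in for the
# real database; table and statements are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.execute("INSERT INTO orders (amount) VALUES (10.0)")
conn.commit()  # baseline state of the test database

start = time.perf_counter()
conn.execute("INSERT INTO orders (amount) VALUES (99.0)")  # DML under test
elapsed = time.perf_counter() - start
conn.rollback()  # end in rollback: no permanent change to the test data

rows = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(f"elapsed={elapsed:.6f}s, rows after rollback={rows}")
```

After the rollback the table still holds only its baseline row, so the same test can be rerun repeatedly against identical data.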
After you make the change on your full-sized performance regression test
platform, run the regression test 10-20 times, recording the execution time for
each run.
See if you still comply with your service level desires.
See if you are better or worse than your previous values.
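The two checks above can be sketched as a small comparison function. The sample timings and thresholds below are hypothetical; the shape of the check (average and maximum against the service-level targets, plus percent change against the previous runs) follows the steps described:

```python
# Sketch: compare recorded run times before and after a change against
# the desired average and maximum allowable response times. Samples
# and thresholds are hypothetical.

def summarize(times):
    """Average and maximum of a set of recorded run times."""
    return sum(times) / len(times), max(times)

def compare(before, after, desired_avg, max_allowed):
    avg_b, _ = summarize(before)
    avg_a, max_a = summarize(after)
    pct_change = 100.0 * (avg_a - avg_b) / avg_b
    complies = avg_a <= desired_avg and max_a <= max_allowed
    return pct_change, complies

before = [1.10, 1.05, 1.20, 1.15, 1.08]  # seconds, pre-change runs
after = [0.95, 0.90, 1.00, 0.98, 0.92]   # seconds, post-change runs
pct, ok = compare(before, after, desired_avg=1.0, max_allowed=1.5)
print(f"{pct:+.1f}% change in average; complies with SLA: {ok}")
```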
Notice that there is implicitly a full-sized test system. If your entire system
is constructed of time-based partitioning, you might be able to produce
reliable regression test results on less than a full-sized database with just
several days, weeks, months, or years present (depending on the nature of your
queries and transactions). For financial systems quite often the magic number
is 28 months. And I leave it to the reader to figure out why.
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On
Behalf Of Ethan Post
Sent: Friday, April 21, 2017 1:29 PM
Subject: How do you validate the impact of a change on performance?
This is for my education only; I am not facing any issues here. I have my own
(home-grown) methods for this, but apart from those I have no idea what Oracle
delivers these days or what others are doing.
When a change is made (a config change, storage change, stats collection change,
and so on), how do you know the DB-wide impact of the change in terms of this?
Query ID | Avg. Elapsed Time/Execute Before Change | Avg. Elapsed Time/Execute After Change | % Change
Add total times to the above, and perhaps some charts showing the number of
outliers and the like.
You should be able to paint a very clear picture of the performance change in
the system and narrow down the bulk of the impact of the change to a handful
of statements (this is usually my experience, anyway).
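A sketch of how such a per-query comparison might be assembled, once you have average elapsed time per execution for each statement before and after the change. The sql_id values and timings are invented for illustration; sorting by absolute percent change is what surfaces the "handful of statements" carrying the bulk of the impact:

```python
# Sketch: build the Query ID / before / after / % change table from
# two mappings of per-query average elapsed times, sorted so the
# most-affected statements come first. All data here is hypothetical.

def impact_table(before, after):
    rows = []
    for sql_id in before.keys() & after.keys():  # queries seen in both periods
        b, a = before[sql_id], after[sql_id]
        pct = 100.0 * (a - b) / b
        rows.append((sql_id, b, a, pct))
    # Largest absolute % change first: the bulk of the impact.
    rows.sort(key=lambda r: abs(r[3]), reverse=True)
    return rows

before = {"8f2kd01": 0.50, "a9x3m22": 1.20, "c7q1p45": 0.10}
after = {"8f2kd01": 0.48, "a9x3m22": 2.40, "c7q1p45": 0.11}

for sql_id, b, a, pct in impact_table(before, after):
    print(f"{sql_id} | {b:.2f}s | {a:.2f}s | {pct:+.1f}%")
```

In a real Oracle environment the before/after averages would come from whatever history source you trust (AWR snapshots, for instance), but the ranking logic is the same.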
So I am asking: how do you do this? Does OEM provide this type of reporting
automatically, or do you need to set it up? Do most people have this set up?
Or, if no setup is required, is the navigation to these screens easy to find
and known by most?
This list is largely comprised of DBAs in a more knowledgeable category than
those who have never heard of this list (in my experience), so perhaps a sample
here is not a fair representation of the larger world. My experience from
consulting at a number of corporations is that many have no idea how to get
this data and just try to "get a sense" of the impact of the change, or only
look at a particular query or two, or use data from sources which are really
not set up to provide this type of information in a format that can be easily
understood.
Thanks for sharing if you have time.