As far as whether it's recommended: I recommend it ;) I've done this since
upgrading to 12.1; every application (not ad hoc) statement that runs
multiple times uses a SQL plan baseline. I've only had about 3 statements that
have changed behaviour in an upgrade. I’m on annual leave right now so I
can’t grab a figure but I would say across the applications (both home
grown and 3rd party) this covers 1,000s of different statements. I have
only seen negative impact where heavy hard parsing was being done and the
optimizer couldn’t use the old plan baseline (accepting a usable baseline
removed the impact).
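For anyone wanting to do the same, the capture itself is straightforward.
A minimal sketch (run it while a representative workload executes; how long
you leave capture on is up to you):

  -- Turn on automatic capture: any statement parsed more than once
  -- while this is TRUE gets a SQL plan baseline created for it.
  ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE;

  -- ...let the application run its normal workload...

  -- Turn capture back off once coverage looks good.
  ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = FALSE;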
Before we did this, we needed extra time to make sure every statement was
hit and performing the same (under enough executions). Without baselines in
place, a huge chunk of your statements will use a different plan after the
upgrade. A lot of the time it will be only a tiny change, but you can't tell
until you pull up the plans side by side, and even then you're going to want
to do proper analysis. This simply takes a long time, and at the end of it
you are probably still only 90% confident.
Capturing SQL plan baselines before the upgrade means that in your test
environment all you need to do is make sure each baseline is still being
used. That is simply a matter of checking v$sql: same plan = same
performance profile. You then only need to look at the statements that
aren't using their baseline and see why they can't use their old plan
(usually an optimizer bug fix).
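That check is a rough sketch like the below (the parsing_schema_name
filter is a placeholder for your application schema):

  -- Statements in the cursor cache that are NOT running off a
  -- SQL plan baseline; these are the ones to investigate.
  SELECT sql_id, plan_hash_value, executions, sql_plan_baseline
  FROM   v$sql
  WHERE  sql_plan_baseline IS NULL
  AND    parsing_schema_name = 'APP_USER';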
Of course, there is one tiny exception due to plan hash values not taking
predicate lists into account, but I have only ever come across this once,
and that was on a statement which already required a heavy helping hand. See
https://jonathanlewis.wordpress.com/2018/11/15/num_index_keys/ for more
info.
If application code changes then those changes are tested. That should
happen independently of an Oracle upgrade.
Thanks,
Andy
On Sat, 13 Nov 2021 at 20:03, Lok P <loknath.73@xxxxxxxxx> wrote:
Thank you, Andy. In existing databases, our thought process was not to
create any SQL profiles/baselines/patches unless it's really required, as
they will get detached if the application code changes and give us
surprises. So we mostly try to fix statistics wherever possible, so that
the optimizer can make the best decision for us.
But I understand that in real life this doesn't always happen, and we end
up creating baselines/profiles for certain plan changes where it's
unavoidable and there are not many options to change the code. But as a
long-term fix we suggest hints and then drop the profiles/baselines.
But in this upgrade case you suggest creating baselines for all the SQL
queries in the database. So I am wondering if that is the recommended way
and will not have a negative impact in the long term? Or maybe I am not
fully able to understand your point. Can you explain a bit more about this?
And one of the challenges we normally face with test environments is the
absence of the volume of data, and thus the stats, that we have in
production. So many times the test doesn't really mimic the production
scenario. And I believe the only exact test would be a Real Application
Testing environment, capturing and replaying the exact production load.
Please correct me if wrong.
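My understanding of the capture side is roughly this (name and directory
are placeholders, and it needs the Real Application Testing option
licensed; please correct me if I have this wrong):

  -- Capture the production workload into a pre-created directory object.
  BEGIN
    DBMS_WORKLOAD_CAPTURE.START_CAPTURE(
      name     => 'upgrade_capture',    -- placeholder capture name
      dir      => 'RAT_CAPTURE_DIR',    -- placeholder directory object
      duration => 3600);                -- seconds; here, one hour
  END;
  /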
On Sun, 14 Nov 2021, 12:44 am Andy Sayer, <andysayer@xxxxxxxxx> wrote:
If you want to retain performance then that primarily means keeping the
same execution plans (whether or not they could be improved by Exa magic).
I would recommend gathering SQL plan baselines for every statement that
gets executed by your application.
Once you've upgraded, you can start with SQL plan evolution to improve any
plans that can be improved. Note that this is now done automatically, so
you will want to check that the parameters for it make sense.
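By "check the parameters" I mean something like this (a sketch, assuming
12.2+ where the automatic task is called SYS_AUTO_SPM_EVOLVE_TASK):

  -- See how the automatic evolve task is configured, in particular
  -- whether it is allowed to auto-accept plans.
  SELECT parameter_name, parameter_value
  FROM   dba_advisor_parameters
  WHERE  task_name = 'SYS_AUTO_SPM_EVOLVE_TASK';

  -- To stop it auto-accepting plans while you review them yourself:
  BEGIN
    DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
      task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
      parameter => 'ACCEPT_PLANS',
      value     => 'FALSE');
  END;
  /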
Obviously, the number one must-have requirement is a test environment that
can be properly signed off by all relevant people in your org before you
start thinking about production upgrades.
Thanks,
Andy
On Sat, 13 Nov 2021 at 18:46, Lok P <loknath.73@xxxxxxxxx> wrote:
Hello Listers, with respect to having a safe upgrade (say from 11.2 to
19c) or migration (from HP to Exadata) experience with minimal performance
issues: is there any guideline we should follow, like setting up Exadata
system stats in case the target database is going to be Exadata, or
verifying dictionary stats/table stats etc. in a certain way? I want to
know experts' views, if there are any such guidelines.
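By Exadata system stats I mean something along these lines (please
correct me if I have the call wrong):

  -- Gather Exadata-aware system statistics so the optimizer knows
  -- about the storage characteristics of the target platform.
  EXEC DBMS_STATS.GATHER_SYSTEM_STATS('EXADATA');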
Regards
Lok