If you are allowed to re-tool the view, you *might* put in some shrubs for a
two-level effect:
In the innermost bit, use just the "to date" part, i.e. col_to_date is null or
trunc(col_to_date) >= sysdate. Then discard the presumably small number of
"not yet valid" rows by selecting all you need from that view with the
predicate trunc(col_from_date) <= sysdate.
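A sketch of that two-level arrangement, splitting the original between into its
two halves (all table, view, and column names here are illustrative, borrowed
from the predicate in the original question):

```sql
-- Inner view: keep only rows whose validity period has not yet ended,
-- i.e. just the "to date" half of the original between.
CREATE OR REPLACE VIEW v_not_ended AS
SELECT t.*
FROM   some_table t
WHERE  t.col_to_date IS NULL
   OR  TRUNC(t.col_to_date) >= SYSDATE;

-- Outer query: discard the (presumably few) "not yet valid" rows.
SELECT *
FROM   v_not_ended
WHERE  TRUNC(col_from_date) <= SYSDATE;
```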
You might also redo the logic with lnnvl, which is a published function. That
seems to fix things up for nulls, at least in versions I have tested through
12.1. Probably something about the optimizer knowing it has to look at nulls
to get the answer to the function, so it doesn't use the 5% thingy, but that
is just a guess. I can't prove it from the customer data because I can't show
you, and I'm far too lazy to build a test set. (I have AALS, Age-Acquired
Laziness Syndrome.) Figuring out what to plug into lnnvl to get the same
result set is likely to give you a migraine anyway, and the nested result set
in the view should work, just a bit less efficiently.
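For what it's worth, one candidate lnnvl formulation (untested here; names are
illustrative). lnnvl(c) is true when c is false or unknown, so it absorbs the
null case without an explicit IS NULL branch:

```sql
-- lnnvl(trunc(col_to_date) < sysdate) is true when col_to_date is null
-- or when trunc(col_to_date) >= sysdate -- i.e. "not yet ended".
SELECT *
FROM   some_table
WHERE  TRUNC(col_from_date) <= SYSDATE
AND    LNNVL(TRUNC(col_to_date) < SYSDATE);
```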
I believe this will get you approximately the right cardinality, erring on the
side of a modest number of extra rows (never omitting any you do need), and
then filtering those. This is similar to the "manual bloom filters" we used to
deploy in Oracle v5 when some concatenated indexes didn't work properly.
While applying filters as soon as possible, and ideally before components of
row sources are joined, is usually a good idea, if the CBO gets its panties in
a bunch devising a plan that is guaranteed to get the right answer, and
especially when your total row source is modest anyway, filtering late can be
the best you can actually experience.
I hope this helps. It was triggered by you mentioning that you have a view. In
general, if you control the DDL but not the code, that is still an important
opportunity to help the CBO do something consistently good.
Again,
Good luck!
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On
Behalf Of Laurentiu Oprea
Sent: Wednesday, October 28, 2020 11:27 AM
To: Jared Still
Cc: l.flatz@xxxxxxxxxx; ORACLE-L (oracle-l@xxxxxxxxxxxxx)
Subject: Re: improve cardinality estimation
I would definitely feel very nervous about using sql translation. I clearly
have no intention to use it, but it is good to know about (I didn't know about
it before, so thanks).
At this point I managed to save the situation using a couple of baselines (the
same path is good "enough" for all kinds of situations (bind values), and its
duration for this particular section is 1 minute compared with 1 hour, so this
makes people happy). What's more, with some luck, at the beginning of next year
I'll have the code updated along the lines of what Jonathan suggested. Thanks
all for your input. It was very useful and helpful.
On Wed, Oct 28, 2020 at 16:37, Jared Still <jkstill@xxxxxxxxx> wrote:
and you should feel uneasy about sql translation
sql translation could be used for all kinds of nefarious things.
On Wed, Oct 28, 2020 at 01:29 Lothar Flatz <l.flatz@xxxxxxxxxx> wrote:
My apologies if that was already mentioned (this thread is rather long).
sysdate <= sysdate + 1 always holds true. I do not think the optimizer is
capable of analysis at that level, which goes beyond a straight logical-calculus
transformation.
You could use some constant: "where sysdate between trunc(col_from_date) and
trunc(nvl(col_to_date,to_date('12312999','MMDDYYYY')))".
This approach has its disadvantages too. It can lead to underestimation, as it
extends the time window unrealistically. It might hold in your case; that's a
matter of testing.
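Spelled out as a full query, that suggestion looks like this (table and column
names as in the original question; the year-2999 date is just a far-future
sentinel standing in for "no end date"):

```sql
-- Replace the moving upper bound (sysdate+1) with a fixed far-future
-- constant, so the optimizer sees a literal instead of an expression.
SELECT *
FROM   some_table
WHERE  SYSDATE BETWEEN TRUNC(col_from_date)
                   AND TRUNC(NVL(col_to_date,
                                 TO_DATE('12312999','MMDDYYYY')));
```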
Your rewrite is cleaner and I believe that is the preferred solution.
Instead of changing the code there is always the option of sql translation,
although I would feel a bit uneasy about it.
http://kerryosborne.oracle-guy.com/2013/07/13/sql-translation-framework/
Regards
Lothar
On 28.10.2020 at 08:39, Laurentiu Oprea wrote:
Overall, I'm starting to suspect that the only solution is a code rewrite. But
I'm not sure if this is a legitimate question: if there is an equivalent code
shape, should Oracle rewrite the query behind the scenes into that shape? Or
in this situation does the "sysdate" scare it, being non-deterministic, even
though the code with the different shape is not influenced by the
non-deterministic character of sysdate?
On Wed, Oct 28, 2020 at 07:08, Laurentiu Oprea <laurentiu.oprea06@xxxxxxxxx>
wrote:
Dynamic sampling 11 was the nightmare until not so long ago. Due to a bug the
automatically determined value was 11. It was generating parsing times of over
5 minutes for parallel queries, and when a downgrade happened for various
reasons I could see even 30 minutes of parsing (there were even extreme cases
of 2-3 hours).
Thanks for the suggestion; although it does solve the cardinality issue in my
case, the side effect is crazy parse times for some other, more important
queries.
On Wed, Oct 28, 2020 at 04:12, Mladen Gogala <gogala.mladen@xxxxxxxxx> wrote:
Is this an OLTP instance? If it is not, you may try with
OPTIMIZER_DYNAMIC_SAMPLING=11. If that is an OLTP instance, using
dynamic sampling will increase parsing time and lower the performance,
albeit not drastically.
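For reference, a sketch of how that might be tried, either session-wide or per
statement (the table and column names are illustrative, taken from the
original question):

```sql
-- Session-wide: raise the dynamic sampling level for everything parsed
-- in this session.
ALTER SESSION SET optimizer_dynamic_sampling = 11;

-- Or per statement, via the statement-level hint, leaving the rest of
-- the workload untouched:
SELECT /*+ dynamic_sampling(11) */ *
FROM   some_table
WHERE  SYSDATE BETWEEN TRUNC(col_from_date)
                   AND TRUNC(NVL(col_to_date, SYSDATE + 1));
```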
On 10/27/20 11:31 AM, Laurentiu Oprea wrote:
Hello,
I found a query with a very bad execution plan due to a low
cardinality estimate for the where clause below:
where
sysdate between trunc(col_from_date) and
trunc(nvl(col_to_date,sysdate+1))
Is there any way I can improve the estimate (like extended stats, etc) ?
Thanks for your help.