And, not to take anything away from what is already in the thread, it is often
the case that stuffing the inlists into a temporary table with a no_merge
inline-view join to the table it filters simplifies the optimization (whether
or not it yields the fastest plan).
A further optimization is to add an "AND ... BETWEEN" predicate on the low and
high values of the inlist, with or without the temporary table.
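A minimal sketch of the idea, assuming a hypothetical target table BIG_TABLE
filtered on ID (the GTT, the identifiers, and the hints are illustrative, not
taken from the thread):

```sql
-- One-off DDL: a global temporary table to hold the inlist values.
CREATE GLOBAL TEMPORARY TABLE gtt_inlist (
    id  NUMBER PRIMARY KEY
) ON COMMIT DELETE ROWS;

-- At run time: load the inlist values, then join instead of a long IN (...).
INSERT INTO gtt_inlist (id) VALUES (:val1);
-- ... one insert (or an array insert) per inlist value ...

SELECT t.*
FROM   (SELECT /*+ no_merge */ id FROM gtt_inlist) v,
       big_table t
WHERE  t.id = v.id
-- The extra range predicate built from the low/high of the inlist;
-- this one can help the optimizer even without the temporary table:
AND    t.id BETWEEN (SELECT MIN(id) FROM gtt_inlist)
               AND  (SELECT MAX(id) FROM gtt_inlist);
```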
One other thing: how are the statements getting to the server? I ask because I
once experienced a query whose apparent parse time was really 14k baud plus
line-transmission turnaround time, multiplied by the length of the query text
divided by 80, plus about 1 second once the text actually arrived at the
server. (For this reason, and for security, I recommend constructing queries
as stored procedures, so that the only thing sent from the client (if
anything) is the list of parameters and the name of the package and procedure.
Your mileage may vary.) Eliminating the transmission time of the inbound SQL
text takes that factor out of the solution consideration.
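As a sketch of that stored-procedure approach (package, table, and parameter
names are invented for illustration), the client then sends only a short call
and a couple of bind values rather than the full report text:

```sql
-- Server side: the report query lives in the database.
CREATE OR REPLACE PACKAGE report_pkg AS
    PROCEDURE run_report(p_from IN DATE,
                         p_to   IN DATE,
                         p_rc   OUT SYS_REFCURSOR);
END report_pkg;
/
CREATE OR REPLACE PACKAGE BODY report_pkg AS
    PROCEDURE run_report(p_from IN DATE,
                         p_to   IN DATE,
                         p_rc   OUT SYS_REFCURSOR) IS
    BEGIN
        OPEN p_rc FOR
            -- Stand-in for the large generated report query,
            -- which now never crosses the wire.
            SELECT t.id, t.status, t.changed_on
            FROM   change_log t
            WHERE  t.changed_on BETWEEN p_from AND p_to;
    END run_report;
END report_pkg;
/
-- Client side: all that is transmitted is this call and two bind values, e.g.
--   EXEC report_pkg.run_report(:d1, :d2, :rc);
```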
The same thing applies to return data (which I understand is not the current
case): stuffing reports into files and transmitting the files is often faster
than even the very finest client interface to someone's cuneiform output
device over a bongo-drum Morse-code network.
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On
Behalf Of Jonathan Lewis
Sent: Monday, January 17, 2022 9:06 AM
Subject: Re: Long Parse Time for a big Statement
If this is a common problem, and not susceptible to SQL Patch treatment, or the
addition of /*+ no_or_expand */ hints at the start of the query pattern, then
you could brute force it by setting "_no_or_expansion" to true if you can
identify the sessions that need it, or just set it system-wide. You should get
much faster optimization times - though the 55-way UNION ALL would still have
to be optimized in 55 parts.
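To make those options concrete, a sketch (the table in the first example is
hypothetical; "_no_or_expansion" is a hidden parameter, so check with Oracle
Support before setting it, and the DBMS_SQLDIAG call is shown with the
signature used in recent releases):

```sql
-- Option 1: disable OR-expansion for just this query pattern with a hint.
SELECT /*+ no_or_expand */ t.*
FROM   report_union_view t
WHERE  t.id IN (1, 2, 3 /* ... */);

-- Option 2: attach the hint via a SQL patch, without editing the application.
DECLARE
    l_name VARCHAR2(64);
BEGIN
    l_name := dbms_sqldiag.create_sql_patch(
                  sql_id    => '&sql_id',
                  hint_text => 'NO_OR_EXPAND',
                  name      => 'no_or_expand_patch');
END;
/

-- Option 3: disable it for the sessions that need it, or system-wide.
ALTER SESSION SET "_no_or_expansion" = TRUE;
ALTER SYSTEM  SET "_no_or_expansion" = TRUE;
```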
On Mon, 17 Jan 2022 at 12:59, Lothar Flatz <l.flatz@xxxxxxxxxx> wrote:
At one customer site we see generated statements, actually reports. The
parse time for such a statement is over an hour, if it finishes at all.
It is possible we see "ORA-04031" when we run out of memory in the shared pool.
How big these statements are is hard to tell, since it depends on
formatting. With SQL Developer formatting I get, in one typical example, >
The statements are constructed relatively simply.
It seems to be a kind of change report where columns from different
tables are retrieved.
At the beginning is a big CASE expression where a meaningful name is
generated for each value, followed by the values themselves. I counted 7,400
CASE entries in one statement.
In addition we have a number of big inlists.
All this is running against a union view of 55 Tables.
In other words: if I wanted to stress the parser, I would construct a
statement exactly like this.
However, one hour seems to be an unrealistically long parse time.
That statement clearly needs to be rewritten, but this will take time.
I want to know if there is any quick fix, like increasing the shared pool
a lot (which, unfortunately, I can't test any time soon due to lack of memory).
Any ideas how to speed up the parse time?
Database version is 19.7. Shared pool size is 20GB.