90K rows in one fetch?! That sounds like someone took one look at a
parameter and decided it needed to be as high as possible. No wonder you
see so many “more data” waits; Oracle couldn’t possibly put all those rows
in one packet (unless they were very thin, and duplicated!)
The wait you see there is the time between the Java application getting all
the rows it fetches and it being ready to ingest the next fetch. Check the
application logs for what it was doing at the time.
I’m not sure if you can do row prefetches with JDBC, but if you set the
fetch size at a more reasonable level, the application would be able to
start processing the rows sooner while your DB is prepping the next fetch.
90K rows.... wow!
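For what it’s worth, a minimal sketch of the fetch-size idea in JDBC (the table name and connection are assumptions, but `setFetchSize` is the standard JDBC hint and the Oracle thin driver honors it). The little helper just shows the round-trip arithmetic behind the advice: 90K rows in one fetch is one giant transfer, while a fetch size of a few hundred spreads the work out:

```java
public class FetchSizeDemo {
    // Network round trips needed to drain a result set, assuming one
    // round trip per fetch of fetchSize rows (ceiling division).
    static long roundTrips(long rows, int fetchSize) {
        return (rows + fetchSize - 1) / fetchSize;
    }

    public static void main(String[] args) {
        System.out.println(roundTrips(90_000, 90_000)); // one giant fetch
        System.out.println(roundTrips(90_000, 500));    // many small ones
    }

    // Hypothetical usage against a live connection (conn and the SQL
    // text are assumptions, not from the original thread):
    //
    // PreparedStatement ps = conn.prepareStatement("select ... from t");
    // ps.setFetchSize(500);  // standard JDBC fetch-size hint
    // ResultSet rs = ps.executeQuery();
}
```

The trade-off is the usual one: a larger fetch size means fewer round trips but bigger transfers and more client-side buffering per fetch.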
On Mon, 14 Oct 2019 at 17:31, kunwar singh <krishsingh.111@xxxxxxxxx> wrote:
I got the 10046 trace and found there is a single long wait, which
happens occasionally and causes the issue.
14572 WAIT #140333333333333: nam='SQL*Net more data to client' ela=
2100000012 driver id=675562835 #bytes=8100 p3=0 obj#=3434343
At other times the wait time is around 25 microseconds, but during the
issue a single wait goes up to 2,100 seconds, as shown above.
The JDBC Thin Client is used at the client side, and the FETCH calls show
more than 90K rows fetched in a single call, so I assume the fetch size is
set very high.
I am still checking on the other points that you mentioned.
On Wed, Oct 9, 2019 at 2:50 PM Andy Sayer <andysayer@xxxxxxxxx> wrote:
SQL Monitor live reports use ASH behind the scenes; they ignore sql*net
message from client as it’s classified as an idle wait.
More data to client is classified as active because the client has said it
definitely wants this data, but some buffer in the middle is not big enough
to fit it all in.
I would suggest looking at how much data the client is requesting and
considering what it’s trying to do with it. I would guess either it’s
missing some pagination (i.e. it does one huge fetch) or it’s doing some
aggregation (etc.) logic before the end user sees it; move that to the DB
if you can.
Hope that helps,
On Wed, 9 Oct 2019 at 19:40, kunwar singh <krishsingh.111@xxxxxxxxx> wrote:
Good points. I use the SQL Monitor active report, and
I have seen cases where sql*net message from client waits are missing from
the SQL Monitor report and DBA_HIST_ACTIVE_SESS_HISTORY. But in this case I
do see a few samples for "SQL*Net message from client" as well.
I am checking on the other points that you have mentioned, along with the
other useful points mentioned by Dominic and Mark.
On Wed, Oct 9, 2019 at 1:50 PM Andy Sayer <andysayer@xxxxxxxxx> wrote:
What are you using to show “most of the time is being spent on
"SQL*Net more data to client"”? Is it something that will ignore sql*net
message from client?
A standard 10046 trace will show you everything you really need here:
rows returned in a fetch, and individual wait times for this event and how
often they occurred per fetch.
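Since the client is JDBC, one way to capture that trace is to have the session enable it on itself before the workload runs. This is a sketch only (the connection is an assumption); the ALTER SESSION syntax and event 10046 level 12 (SQL trace including wait events and bind values) are standard Oracle:

```java
public class TraceDemo {
    // Standard Oracle event syntax; level 12 = trace with waits and binds.
    static final String ENABLE_10046 =
        "alter session set events '10046 trace name context forever, level 12'";
    static final String DISABLE_10046 =
        "alter session set events '10046 trace name context off'";

    public static void main(String[] args) {
        System.out.println(ENABLE_10046);
        // Against a live connection (conn is an assumption):
        //
        // try (java.sql.Statement st = conn.createStatement()) {
        //     st.execute(ENABLE_10046);
        //     // ... run the suspect query and fetch all its rows ...
        //     st.execute(DISABLE_10046);
        // }
    }
}
```

The resulting trace file lands in the database server’s trace directory, not on the client.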
What client driver is being used? How is the fetching configured?
On Wed, 9 Oct 2019 at 18:34, Mark W. Farnham <mwf@xxxxxxxx> wrote:
And (not but) are different client programs or client locations the
“catchers” of the data?
If different, seeing if the slow ones are consistently of a type could
be useful in diagnosis and prevention.
A VLAN with no real QOS (even if configured and promised) can also
correlate with this, especially if the physical LAN is used for video.
On Wednesday, October 09, 2019 at 1:16 PM, Dominic Brooks wrote:
Subject: Re: How to trace the root cause of "SQL*Net more data to
client" for only few executions of a query
What arraysize is being used? What’s the average row size?
On 9 Oct 2019, at 17:32, kunwar singh <krishsingh.111@xxxxxxxxx> wrote:
Our customer has one job which runs daily, and one of the SQLs, which
executes around 500 times, had 5-odd executions where it ran slower with
most of the time being spent on "SQL*Net more data to client". Normal run
times are a few seconds, while the longer run times are around 1 hour.
For the other executions we don't see that wait event being a problem.
What kind of tracing can we do to get to the depth of the issue? I want to
be sure before I suggest some tuning at the SDU sizing level, etc.
Because we cannot predict when the slow run of the query will happen,
we cannot wait for it and then run strace etc. exactly at that time;
basically we will have to automate the tracing step.
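One hedged sketch of automating this: Oracle's DBMS_MONITOR package can enable SQL trace for every session matching a given service/module, so the trace is already on whenever the slow execution happens. The service and module names below are hypothetical placeholders (check V$SESSION.SERVICE_NAME and MODULE for the job's real values); the package and procedure are standard Oracle:

```java
public class AutoTraceSetup {
    // DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE turns on SQL trace (with
    // wait events) for all sessions matching the service/module.
    // 'BATCH_SVC' and 'NIGHTLY_JOB' are hypothetical names.
    static final String ENABLE_TRACE_FOR_MODULE =
        "begin dbms_monitor.serv_mod_act_trace_enable(" +
        "service_name => 'BATCH_SVC', " +
        "module_name  => 'NIGHTLY_JOB', " +
        "waits => true, binds => false); end;";

    public static void main(String[] args) {
        System.out.println(ENABLE_TRACE_FOR_MODULE);
        // Run once from any privileged session (conn is an assumption):
        //
        // conn.prepareCall(ENABLE_TRACE_FOR_MODULE).execute();
        //
        // Disable afterwards with
        // dbms_monitor.serv_mod_act_trace_disable(...), since tracing
        // every execution of the job adds overhead and trace-file volume.
    }
}
```

This avoids having to catch the slow run live: every execution is traced, and you only keep the trace file covering the bad one.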