RE: LOB Operation and SQL*Net Message From Client and cursor #0

  • From: Jonathan Lewis <jonathan@xxxxxxxxxxxxxxxxxx>
  • To: 'Oracle-L' <oracle-l@xxxxxxxxxxxxx>
  • Date: Wed, 1 May 2013 10:08:17 +0000

You may have covered this in an email I missed, but if you're testing from 
SQL*Plus then

a) the arraysize is ignored if you actually fetch the LOB

b) if you set longchunksize to be less than long (the defaults are 80 each) 
then you get multiple LOBREAD passes for each LOB fetch. For example, if you set 
long 20000, leave longchunksize at 80, and have LOBs of 4,000 bytes, you will 
see 51 (seems like one too many) calls to LOBREAD.  The LOBREAD calls can be 
very quick - which can lead to all sorts of unaccounted time and granularity 
errors at all the different stages of monitoring. Possibly the application has 
a similar pair of configuration options.
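A quick SQL*Plus sketch of the effect described above (the table and column 
names t and c are hypothetical; the settings and the LOBREAD behaviour are as 
described):

```sql
-- assumes a table t with a ~4,000-byte CLOB column c (hypothetical names)
set long 20000          -- fetch up to 20,000 bytes of each LOB
set longchunksize 80    -- the default; each LOBREAD pass returns at most 80 bytes

alter session set events '10046 trace name context forever, level 8';

select c from t;
-- with 4,000-byte LOBs, expect roughly 4000/80 = 50 LOBREAD calls per row
-- in the trace (51 observed); setting longchunksize to 4000 or more
-- should collapse them to a single pass per LOB
```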

Regards
Jonathan Lewis


________________________________________
From: oracle-l-bounce@xxxxxxxxxxxxx [oracle-l-bounce@xxxxxxxxxxxxx] on behalf 
of Larry Elkins [elkinsl@xxxxxxxxxxx]
Sent: 30 April 2013 23:10
To: 'Mark W. Farnham'; 'Oracle-L'
Subject: RE: LOB Operation and SQL*Net Message From Client and cursor #0

> So a lot of the time is spent on the line turn-around, and both tools count 
> that latency against the
> other guy.

Correct, the guy using opnet is saying that the turn from the app server layer 
to the db and back is all on the DB, with next to nothing on the network itself 
and not much client think time on the client. And of course tracing is saying 
mostly sql*net message from client - the time to stuff the data into the tcp 
buffer, go down the wire, client think time, and come back up. And it's not my 
place to say his tool is wrong and 10046 is right ;-) Maybe they both are right 
and it's in a virtualization layer that is hit after the NIC. It's something we 
are considering, though I don't have the skills or knowledge to dig into that; 
someone else will have to go there.

>
> Can you use a bigger packet?
> Can you try placing your client process directly on the dbserver?

Can't get the client code there, but taking the SQL statement in question, the 
one driving all the elapsed time, I ran SQL*Plus on the server, both through 
SQL*Net and bypassing SQL*Net, arraysize 100 like they do, and "set autotrace 
trace stat" to discard the rows and not eat time rendering them on the screen. 
The processing time drops to 1/10 of what it was within the application, with 
the actual DB time about the same. Unaccounted-for time was also similar, 
though percentage-wise it constituted a much larger slice of total time.
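The server-side test above can be sketched as a short SQL*Plus session (the 
query itself is the statement under investigation, elided here; the settings 
match what is described):

```sql
set arraysize 100                    -- match the application's fetch size
set autotrace traceonly statistics   -- discard rows; no time spent rendering
set timing on                        -- report elapsed time per statement

-- run once via a TNS alias (through SQL*Net) and once with a local
-- bequeath connection (bypassing SQL*Net) to isolate the network cost
select ... ;
```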

When repeating with SQL*Plus on a desktop, the total timings came more into 
line with what we see with the application, though the unaccounted-for time 
went up compared to SQL*Plus on the server. The fetch calls match up with the 
arraysize; it's just those OPI-level trips that add a lot more round trips into 
the mix and don't show up in the cursor's fetch count.

>
> One last thing: Usually tracing on is a minor side effect, but if there is a 
> throughput challenge on
> your trace/output directory and your only wait is a long stream of message 
> waits, that could add up to
> significance. So carefully compare the times traced and untraced to rule this 
> out.

Your comments did make me think about going back and looking at unaccounted-for 
time (something, maybe a Tanel posting, about the work of those OPI calls not 
being counted anywhere). And timing is the same without tracing.
--
//www.freelists.org/webpage/oracle-l
