RE: Increasing row retrieving speed via net8

  • From: Timur Akhmadeev <Akhmadeev@xxxxxxxxxxxxxx>
  • To: "tanel@xxxxxxxxxxxxxx" <tanel@xxxxxxxxxxxxxx>, "grzegorzof@xxxxxxxxxx" <grzegorzof@xxxxxxxxxx>
  • Date: Thu, 12 Apr 2012 07:13:34 +0000

Hi
In addition to what Tanel suggests:

1. if you have repeating values in a fat column, you can try adding an (intermediate) sort on that column. SQL*Net does a simple form of compression on repeating column values (I think this was mentioned some time ago on this list)
2. since you are doing JDBC, think about the memory footprint on the client per 
arraysize: the driver allocates memory in fixed-size chunks on a per-connection 
basis, with sizes 4K, 8K, ... 1G, and the chunk size depends on the table DDL. For 3 
VARCHAR2(1000) columns in a UTF8 database and an arraysize of 1000, a JDBC connection 
would need roughly a 4M memory footprint. Scale accordingly if you go parallel 
(a rough sizing sketch follows below). What makes this worse is how the JDBC driver 
caches fetch buffers: they can hang around for quite some time and, AFAIK, can 
survive several full garbage collection cycles.
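To get a feel for how that footprint scales with arraysize, here is a back-of-the-envelope
sketch (not from the original post): it simply multiplies fetch size, column count, declared
column width and 2 bytes per Java char. That is only an approximation of the real ojdbc
buffer chunking, so it won't match the ~4M figure above exactly, but it shows the scaling.

// Rough, hypothetical estimate of JDBC fetch-buffer memory per open statement.
// Assumes the driver keeps ~fetchSize rows of the declared column width as
// Java chars (2 bytes each); the real driver rounds to its 4K...1G chunks.
public class FetchBufferEstimate {
    static long estimateBytes(int fetchSize, int columns, int maxCharsPerColumn) {
        return (long) fetchSize * columns * maxCharsPerColumn * 2L; // 2 bytes per char
    }

    public static void main(String[] args) {
        // 3 x VARCHAR2(1000) in a UTF8 database, arraysize 1000
        long bytes = estimateBytes(1000, 3, 1000);
        System.out.printf("~%.1f MB per open statement%n", bytes / (1024.0 * 1024.0));
        // Multiply by the number of concurrent connections/statements
        // before raising the fetch size across the board.
    }
}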

Regards
Timur Akhmadeev
-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On 
Behalf Of Tanel Poder
Sent: Thursday, April 12, 2012 7:18
To: grzegorzof@xxxxxxxxxx
Cc: oracle-l@xxxxxxxxxxxxx
Subject: Re: Increasing row retrieving speed via net8

Hi Greg,
Your measured data shows how the session response time was spent:

---------------------------------------------------------------------------------
 Active% | SQL_ID          | EVENT                               | WAIT_CLASS
---------------------------------------------------------------------------------
     30% | 89km4qj1thh13   | SQL*Net more data to client         | Network
      3% | 89km4qj1thh13   | ON CPU                              | ON CPU


The above shows that 30% of the time is spent waiting on "SQL*Net more data to 
client" - this can be "tuned" down somewhat by increasing the OS TCP send 
buffer size, which would allow more data to be in flight on this TCP 
connection.

Only 3% is spent on CPU, so there's not much to do here :)

But what about the remaining 67% of the time? That was idle time for this 
session, in other words the "SQL*Net message from client" wait event. 
This event can be "tuned" down by reducing the number of network roundtrips you do (so 
that there is less time spent waiting for the next application fetch command):

---------------------------------------------------------------------------------------------
    SID, USERNAME  , TYPE, STATISTIC                             ,   DELTA,  HDELTA, HDELTA/SEC
---------------------------------------------------------------------------------------------
    149, GG        , STAT, SQL*Net roundtrips to/from client     ,     726,     726,      145.2


So, you were doing about 145 network roundtrips per second (726 roundtrips over the 
5-second snapper interval). Each fetch except the first one is a separate SQL*Net 
roundtrip, for example.

How do you reduce the SQL*Net roundtrips due to fetches? Well, fetch less :) Either 
reduce the number of rows returned or just increase the arraysize even further 
(note that larger fetch sizes require more PGA memory, so you don't want to set 
the arraysize to 5000 for all your 1000 connections - only for those 
connections that move a lot of data).
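In JDBC, arraysize corresponds to the statement fetch size, so a minimal sketch along
these lines (the connect string, credentials and fetch size are made-up illustration
values, not from the thread) would raise it only for the bulk-transfer statement:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class BulkFetch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connect string and credentials, just for illustration.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/orcl", "gg", "secret");
             Statement stmt = conn.createStatement()) {

            // Raise the fetch size (arraysize) only on this statement, not as a
            // connection- or driver-wide default, so the bigger fetch buffers
            // are paid for only where the data volume justifies them.
            stmt.setFetchSize(5000);

            long rows = 0;
            try (ResultSet rs = stmt.executeQuery("select * from t")) {
                while (rs.next()) {
                    rows++; // roughly rows/5000 roundtrips for the fetch phase
                }
            }
            System.out.println("fetched " + rows + " rows");
        }
    }
}

With the 5M-row table from the original post and a fetch size of 5000, the fetch-related
roundtrips come down to roughly 1000 in total.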


I'd do these things in this order:

1) Increase the arraysize for those connections/apps that move a lot of data to 
the max (or to the point where the session idle time doesn't get much lower 
anymore while the fetching is going on)

If that's not enough, then you'll also need to optimize the network throughput (that 
30% of the time):

2) Read: http://en.wikipedia.org/wiki/Bandwidth-delay_product

3) Read: http://www.psc.edu/networking/projects/tcptune/

4) Configure your OS (max) TCP send/receive buffer sizes so they at least support the 
data transfer throughput you want to achieve, given your network link capacity and the 
roundtrip time between the endpoints (a rough worked example follows after this list)

5) After that, configure the Oracle SQL*Net SDU size to the max (note that a common 
mistake is to spend a lot of time tweaking Oracle SDUs while the OS TCP 
send/receive buffer sizes are still too low for the required throughput at the current 
roundtrip latency). So you should understand the stuff in 2/3/4 before tweaking 
#5.
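
To make step 2 concrete, here's a small back-of-the-envelope sketch (not from the original
mail; the 1 Gbit/s link speed and 10 ms RTT are assumed example values): the bandwidth-delay
product is the minimum amount of data that must be in flight - and therefore buffered - to
keep the link full, so the OS max TCP send/receive buffers from step 4 need to be at least
this big before the SDU tweaking in step 5 can pay off.

// Bandwidth-delay product sketch: minimum TCP window/buffer needed to keep
// a link fully utilized. Link speed and RTT below are made-up example values.
public class BandwidthDelayProduct {
    public static void main(String[] args) {
        double linkBitsPerSecond = 1_000_000_000d; // assumed 1 Gbit/s link
        double roundTripSeconds  = 0.010;          // assumed 10 ms RTT

        double bdpBytes = linkBitsPerSecond * roundTripSeconds / 8; // bits -> bytes
        System.out.printf("BDP ~= %.0f KB%n", bdpBytes / 1024);
        // ~1220 KB here: if the OS max TCP send/receive buffers are smaller
        // than this, the connection cannot keep that much data in flight and
        // throughput is capped regardless of the SQL*Net SDU setting.
    }
}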

By the way - it's good to see that others are also using snapper for measuring 
network throughput stuff ... Back in the day when I wrote snapper v1, it was a 
major help when diagnosing dblink throughput issues over WANs...

--
*Tanel Poder*
Enkitec Europe
http://www.enkitec.com/
Advanced Oracle Troubleshooting v2.0 Seminars in May/June 2012!
http://blog.tanelpoder.com/seminar/


On Wed, Apr 11, 2012 at 9:30 PM, GG <grzegorzof@xxxxxxxxxx> wrote:

> Hi,
>  I'm doing large table processing (row fetching) via Net8, from a remote
> client using the Oracle JDBC driver.
> Did some testing using snapper and two VMs with Oracle 11.2.0.2.
> Got table T with 5M rows, avg row length 104 (it's a CTAS from DBA_SOURCE,
> and I'm doing a simple select * from t).
> First I used the BEQ (local connection :)) protocol as a reference:
>
> SQL> @snapper all 5 1 145
> Sampling SID 145 with interval 5 seconds, taking 1 snapshots...
> setting stats to all due to option = all
>






--
//www.freelists.org/webpage/oracle-l

