Re: Increasing row retrieving speed via net8

So what does snapper say now? If your sessions are still idle 67% of the time
waiting for the next fetch call from the app, there's not much you can do on
the database side, as I wrote previously. The biggest gains come from fetching
more rows per call (or from reducing the network roundtrip time and
application think time).
With 8 sessions in parallel you're fetching 8x more rows per second in
aggregate: while some sessions are waiting for the next fetch call to arrive,
others are transmitting.
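As a back-of-the-envelope sketch of that overlap (the 67% idle figure and the
8 sessions come from the measurements above; the throughput number is purely
hypothetical):

```python
# Rough model: each session transmits only part of the time; the rest is
# spent waiting for the application's next fetch call. Running N sessions
# in parallel multiplies aggregate throughput even though each one still
# idles the same fraction of the time. Numbers are illustrative.

def aggregate_rows_per_sec(rows_per_sec_when_busy, busy_fraction, sessions):
    """Aggregate throughput of N identical sessions, each of which is
    actually sending data only busy_fraction of the time."""
    per_session = rows_per_sec_when_busy * busy_fraction
    return per_session * sessions

# One session, 33% busy (67% idle between fetch calls)
single = aggregate_rows_per_sec(30000, 0.33, 1)

# Eight sessions: while some wait for the next fetch call, others transmit
eight = aggregate_rows_per_sec(30000, 0.33, 8)

print(single)
print(eight)
```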

Also, you set net.core.wmem_default to 256k. The wmem (write-to-socket
memory, seen from the server side) is where packets are buffered up when
sending data back to the client. So if you want to push data faster from the
server to the client (in other words, have the client pull data from the
server), you'll need to allow the wmem to be higher.
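For example, raising the OS-level send buffer defaults on the server could
look like this (a sketch only - the 1 MB values are illustrative, and the
right numbers depend on your memory budget and connection count):

```
# /etc/sysctl.conf (or apply live with sysctl -w) - illustrative values
net.core.wmem_default = 1048576   # default socket send buffer
net.core.wmem_max     = 1048576   # ceiling a process may request via SO_SNDBUF
```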

But your Oracle-level RECV_BUF_SIZE and SEND_BUF_SIZE values are only 64k, so
your Oracle processes resize the TCP send buffers for their sockets back
down, and the larger OS-level setting doesn't help.

You can set the SEND_BUF_SIZE to 1M and see what happens then.
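A sketch of what that would look like in the server-side sqlnet.ora (keeping
in mind that the OS-level net.core.wmem_max must be at least this large for
the request to take effect):

```
SEND_BUF_SIZE=1048576
```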

But again, as I said, the *SQL*Net more data to client* wait accounted for
only 30% in your previous measurements, so you should first fix where the
67% went - fetch more rows at a time.
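To make the "fetch more at a time" point concrete, here is a rough model (all
numbers hypothetical) of how the array fetch size drives the number of
client-server roundtrips, and hence the time spent waiting between fetch
calls. In a real client you'd control this with e.g. the JDBC fetch size or
your driver's array/prefetch setting:

```python
# Rough model: fetch-call roundtrips needed to pull a result set.
# All figures are hypothetical illustrations.

def fetch_roundtrips(total_rows, rows_per_fetch):
    """Number of fetch calls needed to retrieve total_rows
    (ceiling division)."""
    return -(-total_rows // rows_per_fetch)

total_rows = 1_000_000
rtt_seconds = 0.0005  # 0.5 ms network roundtrip (hypothetical)

for rows_per_fetch in (10, 100, 1000):
    calls = fetch_roundtrips(total_rows, rows_per_fetch)
    # Larger array size -> fewer calls -> less time lost to roundtrips
    print(rows_per_fetch, calls, calls * rtt_seconds)
```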

-- 
*Tanel Poder*
Enkitec Europe
http://www.enkitec.com/
Advanced Oracle Troubleshooting v2.0 Seminars in May/June 2012!
http://blog.tanelpoder.com/seminar/


On Thu, Apr 12, 2012 at 9:47 PM, GG <grzegorzof@xxxxxxxxxx> wrote:

> Thanks for all the valuable comments. I did as follows:
>
> 1. Set
>
> DEFAULT_SDU_SIZE=32767
>
> in sqlnet.ora on both client and server, and
> checked that it is negotiated properly via a net8 trace - and it is.
>
> 2. Set on both linux machines:
>
>   net.core.rmem_default = 1048576
>   net.core.rmem_max = 1048576
>   net.core.wmem_default = 262144
>   net.core.wmem_max = 1048576
>
> 3. Set in sqlnet.ora on both the server and client machine:
> RECV_BUF_SIZE=65536
> SEND_BUF_SIZE=65536
>
>


--
http://www.freelists.org/webpage/oracle-l
