Poor performance on bulk transfer across db link.

I'm reading many hundreds of gigabytes from a 9iR2 database to a 10gR2 database 
through a database link. Some of the tables I am reading are rather wide, with 
average column lengths of between 500 and 850 bytes.
   
  Performance appears to be constrained at the network level, with throughput 
on the order of 5 MBytes/sec on a gigabit network that achieves 44 MBytes/sec 
with FTP. There are no hops between the databases; traceroute shows a direct 
server-to-server path.
   
  I've been googling around and came across 
http://www.fors.com/velpuri2/PERFORMANCE/SQLNET.pdf, which explains the 
relationship between array size, row lengths, MTU, SDU, etc.
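  For anyone wanting to experiment along the same lines: SDU is negotiated per 
connection (down to the smaller of the two ends' values), so it has to be 
raised on both sides. A sketch of the relevant Net config -- host and service 
names here are placeholders, and 32767 is the pre-11g maximum SDU:

```
# tnsnames.ora on the pulling (10gR2) side
SRC9I =
  (DESCRIPTION =
    (SDU = 32767)
    (ADDRESS = (PROTOCOL = TCP)(HOST = source-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = src))
  )

# sqlnet.ora on the 10gR2 side (DEFAULT_SDU_SIZE is a 10g parameter)
DEFAULT_SDU_SIZE = 32767

# on the 9iR2 side, SDU goes into the SID_DESC in listener.ora instead
```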
   
  Statspack on the source db shows the following for a one-hour snapshot:
   
  SQL*Net more data to client:
    Waits:           1,336,548
    Timeouts:        0
    Total wait time: 2,885 s
    Avg wait:        2 ms
    Waits/txn:       2,069.0
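  As a sanity check, the average wait can be recomputed from those figures 
(assuming the total wait time is reported in seconds, which Statspack does for 
this section):

```python
# Back-of-envelope check on the Statspack numbers above.
waits = 1_336_548          # 'SQL*Net more data to client' waits in the snapshot
total_wait_s = 2_885       # total wait time, assumed to be in seconds

avg_wait_ms = total_wait_s / waits * 1000
print(f"average 'more data to client' wait: {avg_wait_ms:.2f} ms")
```

That comes out at about 2.16 ms per wait, consistent with the 2 ms average 
Statspack reports.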
   
  So firstly, am I right in thinking that the default arraysize for database 
links is 15 rows?
   
  If so, given that the MTU is 1,500 bytes, the SDU is the default 2KB, and the 
average row length is 600 bytes, is that ~5 MBytes/sec transfer rate 
surprising? If the MTU and SDU were adjusted skywards towards the 15 * 600 
range (say 10KB), would I expect to get much of an improvement?
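  To put rough numbers on that question: each 15-row fetch carries about 
15 * 600 = 9,000 bytes, so at the default 2KB SDU every fetch is split across 
five SDU buffers, and each extra buffer is a candidate "more data to client" 
wait. A quick sketch of the arithmetic (illustrative only -- it ignores 
per-row overheads and TCP behaviour):

```python
# Buffers per array fetch at different SDU sizes, assuming a 15-row
# array and 600-byte average rows, as described in the post.
from math import ceil

rows_per_fetch = 15
avg_row_bytes = 600
payload = rows_per_fetch * avg_row_bytes  # 9,000 bytes per fetch

for sdu in (2048, 10240, 32767):
    print(f"SDU {sdu:>5}: {ceil(payload / sdu)} buffer(s) per fetch")
```

At a 10KB or larger SDU the whole fetch fits in one buffer, which is why 
raising the SDU (and/or the array size) is usually the first thing to try for 
this wait event.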
   
  Thanks in advance for any help -- I'm a network idiot.
