Re: Poor performance on bulk transfer across db link.

  • From: kathy duret <katpopins21@xxxxxxxxx>
  • To: oracle-l@xxxxxxxxxxxxx
  • Date: Wed, 7 Nov 2007 20:43:09 -0800 (PST)

How are you doing the bulk transfers?  
   
  It sounded like a SQL query...
   
  Materialized view logs on the 9i side just hold the changes.
   
  The MVs on the 10g side would hold the table data, and you would run jobs to 
refresh them every so often.
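   
  Just as a rough sketch (table and link names here are made up, and you'd want 
to check the exact syntax against your versions), the setup looks something like:
   
  -- On the 9i source database: capture changes for fast refresh
  CREATE MATERIALIZED VIEW LOG ON big_table
    WITH PRIMARY KEY;
   
  -- On the 10g target database, over a db link back to the 9i source
  CREATE MATERIALIZED VIEW big_table_mv
    BUILD IMMEDIATE
    REFRESH FAST ON DEMAND
    AS SELECT * FROM big_table@source_link;
   
  -- Refreshed every so often from a scheduled job
  EXEC DBMS_MVIEW.REFRESH('BIG_TABLE_MV', 'F');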
   
  You need to describe more about what you are doing, how often you have to do 
it, and so on, so we can have a better understanding of what you are trying to 
accomplish.
   
  You also could try a bulk collect process.
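   
  A minimal sketch of that (table and link names are invented, and you'd tune 
the LIMIT to your row width):
   
  -- Run on the 10g side, pulling rows across the link in batches
  DECLARE
    CURSOR c IS SELECT * FROM big_table@source_link;
    TYPE t_rows IS TABLE OF c%ROWTYPE;
    l_rows t_rows;
  BEGIN
    OPEN c;
    LOOP
      FETCH c BULK COLLECT INTO l_rows LIMIT 500;
      EXIT WHEN l_rows.COUNT = 0;
      -- array insert of each fetched batch into a local copy of the table
      FORALL i IN 1 .. l_rows.COUNT
        INSERT INTO big_table_local VALUES l_rows(i);
      COMMIT;
    END LOOP;
    CLOSE c;
  END;
  /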
   
  I don't know if you can do Streams from 9i to 10g. I also understand Streams 
is buggy in 9i and a bit more challenging to set up than MVs.
   
  K

David Aldridge <david@xxxxxxxxxxxxxxxxxx> wrote:
  Maybe I misunderstood, Kathy, but I'm already using db links for this.

kathy duret <katpopins21@xxxxxxxxx> wrote:     If you are doing this on a 
regular basis, you might want to set up materialized views.
   
  This is what I do now.
   
  The logs are on my 9i db and the views are on a 10g.
   
  You really should be doing bulk transfers across a db link (IMHO)
   
  It is fairly fast to set up and I haven't had any issues.
   
  Kathy

David Aldridge <david@xxxxxxxxxxxxxxxxxx> wrote:
    I'm reading many hundreds of gigabytes from a 9iR2 database to a 10gR2 
database through a database link. Some of the tables I am reading are rather 
wide, with average column lengths of between 500 and 850 bytes.
   
  Performance appears to be constrained at the network level: throughput is on 
the order of 5 MBytes/sec on a gigabit network that demonstrates a 44 MByte/sec 
ftp speed. There are no hops between the databases; traceroute shows a direct 
server-to-server transfer.
   
  I've been googling around and came across 
http://www.fors.com/velpuri2/PERFORMANCE/SQLNET.pdf, which explains the 
relationship between array size, row lengths, MTU, SDU, etc.
   
  Statspack on the source db shows the following for a one hour snapshot:
   
  SQL*Net more data to client:
    Waits:               1,336,548
    Timeouts:            0
    Total wait time (s): 2,885
    Avg wait (ms):       2
    Waits/txn:           2,069.0
   
  So firstly, am I right in thinking that the default arraysize for database 
links is 15 rows?
   
  If so, given that the MTU is 1500, the SDU is the default 2Kb, and the 
average row length is 600, is that data transfer rate of 4MBytes/sec 
surprising? If the MTU and SDU were adjusted skywards to the 15*600 range (say 
10kb) would I expect to get much of an improvement?
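   
  For reference, the sort of change I'd be experimenting with is along these 
lines (alias, host, and SID names are invented; as far as I can tell the SDU 
ceiling is 32767 bytes in these versions, but corrections welcome):
   
  # tnsnames.ora on the 10g side, for the alias the db link connects to
  SRC9I =
    (DESCRIPTION =
      (SDU = 16384)
      (ADDRESS = (PROTOCOL = TCP)(HOST = source-host)(PORT = 1521))
      (CONNECT_DATA = (SID = src))
    )
   
  # listener.ora on the 9i side, so the server end negotiates the same SDU
  SID_LIST_LISTENER =
    (SID_LIST =
      (SID_DESC =
        (SDU = 16384)
        (SID_NAME = src)
        (ORACLE_HOME = /u01/app/oracle/product/9.2.0)
      )
    )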
   
  Thanks in advance for any help -- I'm a network idiot.
