Re: Poor performance on bulk transfer across db link.

  • From: "Alberto Dell'Era" <alberto.dellera@xxxxxxxxx>
  • To: david@xxxxxxxxxxxxxxxxxx
  • Date: Wed, 7 Nov 2007 22:15:44 +0100

David, it seems that the FETCH size used by the dblink matches sqlplus'
arraysize for plain queries, but for CTAS and inserts it does not: there it
is much bigger (probably "infinite"), at least in my test case:

9.2.0.4:
create user link_user identified by link_user
  temporary tablespace temp default tablespace users;
grant connect, resource, alter session to link_user;
create table link_user.t as select rownum x from dual connect by level <= 10000;
create trigger link_user.activate_trace
after logon on link_user.schema
begin
  execute immediate
    'alter session set events ''10046 trace name context forever, level 12''';
  execute immediate
    'alter session set events ''10051 trace name context forever, level 1''';
end;
/
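
A side note for anyone reproducing this: the 10046/10051 trace files land in
user_dump_dest on the 9i target, one per dblink session. A quick way to find
the directory; just a sketch, assuming you can query v$parameter there:

-- run on the 9i target
select value from v$parameter where name = 'user_dump_dest';

then pick the most recent *.trc file and look at its FETCH and WAIT lines.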

10.2.0.3 (using 11.1.0.6 sqlplus):
create public database link target_db
  connect to link_user identified by link_user using 'oracle9i';

select * from t@target_db;
(default sqlplus arraysize of 15)
a series of
FETCH #1:c=10015,e=8517,p=0,cr=1,cu=0,mis=0,r=15,dep=0,og=4,tim=7386308205
----
set arraysize 147
select * from t@target_db;
a series of
FETCH #1:c=0,e=4611,p=0,cr=2,cu=0,mis=0,r=147,dep=0,og=4,tim=7815077002
----
set arraysize 156
create table local_t as select * from t@target_db;
FETCH #1:c=290418,e=384630,p=0,cr=16,cu=0,mis=0,r=9999,dep=0,og=4,tim=7951647605
----
set arraysize 156
insert into local_t(x) select * from t@target_db;
FETCH #1:c=10014,e=18096,p=0,cr=2,cu=0,mis=0,r=666,dep=0,og=4,tim=8125609104
FETCH #1:c=50072,e=97190,p=0,cr=9,cu=0,mis=0,r=5333,dep=0,og=4,tim=8125756847
FETCH #1:c=60086,e=74140,p=0,cr=7,cu=0,mis=0,r=4000,dep=0,og=4,tim=8125899128
----
set arraysize 156
insert /*+append */ into local_t(x) select * from t@target_db;
same as insert above
(note: in all cases I have omitted the first FETCH, which returns r=1)
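
Just to put numbers on the round trips, the arraysize matches the fetch
counts you'd expect; a trivial sanity check (run anywhere):

-- expected fetches for 10000 rows at arraysize 15 vs 147
select ceil(10000/15) fetches_at_15, ceil(10000/147) fetches_at_147 from dual;

i.e. roughly 667 and 69 round trips, versus a single FETCH of 9999 rows for
the CTAS. The three insert fetches add up to the same total:
666 + 5333 + 4000 = 9999.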

I have also increased the row length:

drop table link_user.t;
create table link_user.t as select lpad(rownum,1000,'x') x from dual
  connect by level <= 10000;

The results were the same (actually both inserts showed r=9999).

For CTAS and inserts, the wait event has always been
WAIT #1: nam='SQL*Net more data to client' ela= 59 p1=1413697536 p2=2022 p3=0
where p2 (the number of bytes being sent) is almost always around 2000,
plus or minus 30, whatever the row length.
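
The same picture shows up without tracing if you look at the dblink
session's waits on the 9i side; just a sketch, the sid below (42) is a
placeholder you'd get from v$session:

-- run on the 9i target; 42 is a placeholder for the dblink session's SID
select event, total_waits, time_waited
  from v$session_event
 where sid = 42
   and event like 'SQL*Net%';

(time_waited is in centiseconds on 9i.)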

I'm network challenged myself; anyway, 2000 is close to the default SDU of 2K,
though I don't know whether there's any relation.
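
If someone wants to test whether a bigger SDU changes that ~2000-byte
chunking, one way that stays inside SQL is to put a full connect descriptor,
SDU included, in the link's USING clause; just a sketch, host/port/service
are placeholders, and as far as I know the SDU is negotiated down to the
smaller of the two sides, so the 9i listener/sqlnet.ora would need a
matching value as well:

create public database link target_db_sdu connect to link_user identified by link_user
using '(DESCRIPTION=(SDU=32767)
        (ADDRESS=(PROTOCOL=TCP)(HOST=myhost)(PORT=1521))
        (CONNECT_DATA=(SERVICE_NAME=myservice)))';

Then repeat the CTAS through target_db_sdu and compare the p2 values of the
'SQL*Net more data to client' waits in the new trace.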

Windows XP, sqlplus, source and target db on the same machine.

HTH
Alberto

On 11/7/07, David Aldridge <david@xxxxxxxxxxxxxxxxxx> wrote:
> I'm reading many hundreds of gigabytes from a 9iR2 database to a 10gR2
> database through a database link. Some of the tables I am reading are rather
> wide, with average column lengths of between 500 and 850 bytes.
>
> Performance appears to be constrained at the network level, with bandwidth
> in the order of 5Mbytes/sec on a gigabit network which demonstrates a
> 44MByte/sec ftp speed. There are no hops between the databases, with
> traceroute showing a direct server-to-server transfer.
>
> I've been googling around and came across
> http://www.fors.com/velpuri2/PERFORMANCE/SQLNET.pdf which
> explains the relationship between array size, row lengths, MTU, SDU etc..
>
> Statspack on the source db shows the following for a one hour snapshot:
>
> SQL*Net more data to client:
> 1,336,548 waits
> 0 timeouts
> 2,885 total wait time
> 2 Avg Wait (ms)
> 2,069.0 waits/txn
>
> So firstly, am I right in thinking that the default arraysize for database
> links is 15 rows?
>
> If so, given that the MTU is 1500, the SDU is the default 2Kb, and the
> average row length is 600, is that data transfer rate of 4MBytes/sec
> surprising? If the MTU and SDU were adjusted skywards to the 15*600 range
> (say 10kb) would I expect to get much of an improvement?
>
> Thanks in advance for any help -- I'm a network idiot.


-- 
Alberto Dell'Era
"the more you know, the faster you go"