Re: Moving DR site from 30miles to 1600miles

  • From: "Ravi Gaur" <ravigaur1@xxxxxxxxx>
  • To: "Tanel Poder" <tanel.poder.003@xxxxxxx>
  • Date: Fri, 11 Apr 2008 15:07:44 -0500

Thanks, all, for the responses -- that was indeed useful information. Our
networking team is looking at the TCP window size and is in discussions with
AT&T (the OC-3 is leased from AT&T and, as I understand it, it is not a
point-to-point link).

- Ravi

On Thu, Apr 10, 2008 at 11:47 AM, Tanel Poder <tanel.poder.003@xxxxxxx>
wrote:

>  You might need a more capable SSH client if your current one is
> OpenSSH-based; OpenSSH apparently limits its internal send buffer sizes
> even if your server defaults / maximums are set higher to match WAN latencies.
>
> Check this link if you want a high-throughput version of SSH:
> http://www.psc.edu/networking/projects/hpn-ssh/
>
> 64kB of send buffer for a 1600-mile "wide" WAN is too low if you want to
> achieve decent throughput. TCP, as a reliable transport protocol, needs
> retransmit capability, and thus must keep every packet in the send buffer
> until it is acknowledged by the other side; a small buffer will therefore
> start throttling your throughput when the network round-trip time is long.
>
> You can actually calculate how large a TCP buffer you need if you want to
> fill your OC-3 link. Google for "bandwidth delay product"; the formula is
> very simple (only three variables: link bandwidth, TCP buffer size, and
> round-trip time, which you can roughly measure with ping or tnsping).
>
> --
> Regards,
> Tanel Poder
> http://blog.tanelpoder.com
>
>
>  ------------------------------
> *From:* oracle-l-bounce@xxxxxxxxxxxxx [mailto:
> oracle-l-bounce@xxxxxxxxxxxxx] *On Behalf Of *Ravi Gaur
> *Sent:* Wednesday, April 09, 2008 23:39
> *To:* oracle-l@xxxxxxxxxxxxx
> *Subject:* Moving DR site from 30miles to 1600miles
>
> Hello all,
>
> We are planning to move our DR site, currently about 30 miles from the
> production site, to one ~1600 miles away. We currently have a 4-node RAC
> setup at our production site that houses 3 production instances (all
> 10.2.0.3 on Solaris 10). The SAN is StorageTek and we use ASM for volume
> management.
> In our testing, we are hitting issues with network transfer rates to the
> 1600-mile site -- a simple "scp" of a 1GB file takes about 21 minutes. We
> generate archives at a rate of approximately 1GB per 8 minutes. The network
> folks tell me that the TCP setting is a constraint here (currently a 64k
> window size, which the sysadmins here say is the maximum setting). We have
> an OC-3 link that can transfer at 150Mbps (that is what the networking team
> tells me).
>
> I have an SR open with Oracle and have also gone through a few Metalink
> notes that discuss optimizing the network from a Data Guard perspective.
> One of the notes I came across also describes a cascaded standby Data
> Guard setup (a local standby pushes the logs on to the remote site).
>
> I'm trying to collect ideas on how others handle similar scenarios, and on
> whether there is something we can do to utilize the full network bandwidth
> available to us.
>
> TIA,
>
> - Ravi
>
>

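For reference, the bandwidth-delay-product arithmetic Tanel describes can be sketched as follows. This is a minimal illustration using the 150Mbps figure from the thread; the 40 ms round-trip time is an assumed value for a ~1600-mile path, not a measured one -- measure the real RTT with ping.

```python
# Assumed inputs: OC-3 usable rate from the thread, and a hypothetical RTT.
link_bps = 150_000_000   # link bandwidth, bits per second
rtt_s = 0.040            # assumed round-trip time for ~1600 miles, seconds

# Bandwidth-delay product: bytes that must be "in flight" (unacknowledged,
# held in the send buffer) to keep the link full.
bdp_bytes = link_bps * rtt_s / 8
print(f"BDP: {bdp_bytes / 1024:.0f} KB")  # ~732 KB

# Conversely, a fixed TCP window caps throughput at window / RTT.
window_bytes = 64 * 1024
ceiling_bps = window_bytes * 8 / rtt_s
print(f"64 KB window ceiling: {ceiling_bps / 1e6:.1f} Mbit/s")  # ~13.1 Mbit/s
```

Under that assumed RTT, a 64 KB window caps transfers at roughly 13 Mbit/s regardless of link capacity, which is broadly consistent with the observed 1GB-in-21-minutes scp (about 6 Mbit/s effective, since scp's internal buffer can be smaller still).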