Re: Dataguard in data warehousing

  • From: Tim Gorman <tim.evdbt@xxxxxxxxx>
  • To: gogala.mladen@xxxxxxxxx, oracle-l@xxxxxxxxxxxxx
  • Date: Sun, 24 Nov 2019 12:22:45 -0800

As Mr Mischke stated in the blog post, iperf3 is designed to measure the maximum network throughput between the TCP stacks of the servers on either end.  It can open multiple connections in order to force artificially generated data volumes across and back.  But there is an important distinction: iperf3 does not simulate the streaming of data.  When multiple connections are streaming, received data must be buffered on receipt so that it can be re-sequenced to match the original order.

Moreover, the sending side can either use a simple round-robin algorithm to write blocks to each of the network connections in turn, or it can expend more processing to write to the least-blocked connection; there are pros and cons to each.  On the receiving side, the throughput of each network connection, as well as the sending algorithm, affects buffering and re-sequencing before delivery.  It's pretty complex stuff.
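To make the round-robin send and receive-side re-sequencing above concrete, here is a toy sketch.  The function names and the shuffled-arrival model are purely illustrative (they are not how iperf3 or any redo-transport code is implemented); the point is only that blocks arriving out of order must sit in a buffer until every earlier block has arrived.

```python
import heapq
import random

def send_round_robin(blocks, n_conns):
    """Distribute sequence-numbered blocks across connections in turn."""
    conns = [[] for _ in range(n_conns)]
    for seq, block in enumerate(blocks):
        conns[seq % n_conns].append((seq, block))
    return conns

def receive_and_resequence(conns):
    """Simulate out-of-order arrival, then re-sequence with a buffer."""
    # Shuffle arrivals to mimic connections draining at unequal speeds.
    arrivals = [item for conn in conns for item in conn]
    random.shuffle(arrivals)
    buffer, next_seq, delivered = [], 0, []
    for seq, block in arrivals:
        heapq.heappush(buffer, (seq, block))
        # Deliver only once the next-expected block is at the head.
        while buffer and buffer[0][0] == next_seq:
            delivered.append(heapq.heappop(buffer)[1])
            next_seq += 1
    return delivered

blocks = [f"redo-{i}" for i in range(10)]
conns = send_round_robin(blocks, 3)
assert receive_and_resequence(conns) == blocks
```

The buffer grows whenever one connection lags the others, which is exactly the cost a pure bandwidth tool like iperf3 never pays.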

Besides not being a streaming application, as Mr Mischke points out, iperf3 also misses the impact of Data Guard options such as SYNC/NOSYNC and AFFIRM/NOAFFIRM, all of which are simulated by oratcptest.

So iperf3 and oratcptest are not competing to measure the same metric; they measure and report different things...

 * iperf3 measures maximum available network bandwidth between servers
 * oratcptest simulates Data Guard redo shipping, including options like
   SYNC/NOSYNC, AFFIRM/NOAFFIRM, etc.


Both metrics complement each other.  Reviewing oratcptest without iperf3 can result in a less complete understanding, while reviewing iperf3 without oratcptest can result in an incorrect understanding.

If iperf3 shows throughput considerably less than line speed, the most common assumption is network congestion, which can be verified or debunked by the network team, or by running the tests at different times.  However, consistently disappointing iperf3 results might instead indicate queuing due to misconfiguration of the TCP stack on one or both of the servers involved, which obviously should be resolved.  The consistent workload generated by iperf3 is useful for testing changes such as jumbo frames (e.g. MTU=9000), TCP kernel settings (e.g. rmem, wmem, etc.), and enabling or disabling options like LRO, TSO, etc.  Once we are satisfied with the results from iperf3, we can be confident that we have minimized the environmental issues oratcptest might encounter.
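For reference, the TCP kernel settings mentioned above typically live in /etc/sysctl.conf on Linux.  The parameter names below are real sysctls, but the values are illustrative placeholders only, not tuning recommendations for any particular environment:

```
# /etc/sysctl.conf -- illustrative values only; tune for your environment
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# apply with: sysctl -p
```

Re-run the same iperf3 test (e.g. with several parallel streams via its -P option) before and after each change, so that only one variable moves at a time.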

Hope this helps...

-Tim


On 11/24/19 7:14 AM, Mladen Gogala wrote:

Just to add to Franck's brilliant article: "recover database nologged block" actually works by copying the blocks from the primary, and it is only available for Active Data Guard, not for the run-of-the-mill maximum-performance Data Guard in mounted state. As for the network simulation, thank you, I've learned something. However, I am not sure what it should simulate; I would still prefer to measure my network throughput with iperf3 (https://iperf.fr/). That should give me a good idea of whether the link can support my rate of change or not.


On 11/24/19 6:29 AM, Franck Pachot wrote:
Hi Ram,
For nightly direct-path loads you have the choice of logging (no nologging operations, and force logging to be sure) or syncing after the load (12.2 ADG makes it easy: https://blog.dbi-services.com/12cr2-recover-nonlogged-blocks-after-nologging-in-data-guard/). If you have an idea of the volume, you can simulate it on your network with oratcptest (see https://dbamarco.wordpress.com/2016/11/25/benchmarking-data-guard-throughput/)
Franck.
