[haiku-development] Networking speed update

  • From: Oliver Tappe <zooey@xxxxxxxxxxxxxxx>
  • To: haiku-development@xxxxxxxxxxxxx
  • Date: Tue, 29 Jul 2008 12:42:08 +0200

Hi again,

here's an update on the current speed of haiku's net stack:

On 2008-07-17 at 23:40:36 [+0200], Oliver Tappe <zooey@xxxxxxxxxxxxxxx> wrote:
> With a netserver listening on localhost, netperf has been invoked as follows:
>   netperf -f M -t UDP_STREAM -- -m 32768 -s 65500,65500 -S 65500,65500
>   netperf -f M -t TCP_STREAM -- -m 32768 -s 65500,65500 -S 65500,65500
>   netperf -t UDP_RR
>   netperf -t TCP_RR
> 
> These were the results (please excuse the ascii-art):
> =================================================================
> Operating          | UDP_STREAM | TCP_STREAM | UDP_RR  | TCP_RR
> System             |   (MB/s)   |   (MB/s)   | (req/s) | (req/s)
> -------------------+------------+------------+---------+---------
> Ubuntu (VM)        |         88 |         52 |    7549 |    7379
> -------------------+------------+------------+---------+---------
> Zeta (VM)          | send:  182 |        109 |    3256 |    3256
>                    | recv:   42 |            |         |
> -------------------+------------+------------+---------+---------
> OpenSolaris (VM)   | send:  567 |        195 |    2295 |    2653
>                    | recv:   49 |            |         |
> -------------------+------------+------------+---------+---------
> haiku (VM)         |          2 |         29 |    1510 |     911
---------------------+------------+------------+---------+---------
current haiku (VM)   | send:   64 |         65 |    1730 |    1636
                     | recv:    4 |            |         |
> ===================+============+============+=========+=========
> OpenSUSE (native)  |        737 |        576 |   41932 |   32091
> -------------------+------------+------------+---------+---------
> haiku (native)     |          2 |        127 |   15898 |   12216
---------------------+------------+------------+---------+---------
current haiku        | send:   65 |        117 |   15745 |   11976
(native)             | recv:   41 |            |         |
> =================================================================
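
As a rough illustration of what the UDP_STREAM rows measure (and of how the 
send and recv rates can diverge on localhost), here is a toy sketch using 
plain Python sockets - this is not netperf itself, only the message size is 
borrowed from the invocations above:

```python
# Toy localhost UDP throughput measurement, loosely in the spirit of
# netperf's UDP_STREAM test (plain sockets, not netperf itself).
import socket
import threading
import time

MSG_SIZE = 32768   # borrowed from the "-m 32768" used above
DURATION = 0.5     # seconds; netperf runs much longer

def receiver(sock, counter):
    # Count received bytes until the sender has been quiet for a while.
    sock.settimeout(1.0)
    try:
        while True:
            counter[0] += len(sock.recv(65536))
    except socket.timeout:
        pass

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
addr = recv_sock.getsockname()

received = [0]
t = threading.Thread(target=receiver, args=(recv_sock, received))
t.start()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = b"x" * MSG_SIZE
sent = 0
deadline = time.time() + DURATION
while time.time() < deadline:
    send_sock.sendto(payload, addr)
    sent += MSG_SIZE
t.join()

# When the sender outruns the receiver, datagrams are simply dropped,
# which is why the two rates can differ so much.
print("send MB/s: %.1f" % (sent / DURATION / 1e6))
print("recv MB/s: %.1f" % (received[0] / DURATION / 1e6))
```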

So it seems that the tiny fixes and cleanups I applied a week ago more or less 
fixed the pathological UDP performance.

In case you wonder why the values for "send" and "recv" differ quite 
drastically in certain cases: this is caused by the implementation of the 
localhost interface, where two threads (receiver and sender) race each 
other - Axel had pointed that out in his answer to my earlier post. The 
large difference can be avoided by specifying larger receive buffers, but I 
didn't want to skew the tests (it is not really important, and other OSes 
show that problem, too).
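
For reference, enlarging a socket's receive buffer looks roughly like this 
(a generic sketch using the BSD sockets API via Python, not code from 
haiku's net stack):

```python
# Sketch: enlarging a socket's receive buffer, which is what
# netperf's "-s 65500,65500 -S 65500,65500" options request.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65500)

# Read back what the kernel actually granted (Linux, for example,
# doubles the requested value to leave room for bookkeeping).
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("receive buffer:", granted)
```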

Following are some tests run over local gigabit ethernet (nforce driver) 
against a Linux server:
=================================================================
Operating          | UDP_STREAM | TCP_STREAM | UDP_RR  | TCP_RR
System             |   (MB/s)   |   (MB/s)   | (req/s) | (req/s)
-------------------+------------+------------+---------+---------
openSUSE           |        113 |        111 |     211 |    9714
-----------------------------------------------------------------
haiku              |         38 |         40 |     595 |     632
=================================================================
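
For context, the RR tests count complete request/response round trips, so 
over real ethernet they are dominated by latency rather than bandwidth. A 
toy localhost version of the same idea (plain Python sockets, not netperf):

```python
# Toy TCP request/response loop over localhost, in the spirit of
# netperf's TCP_RR test (1-byte request, 1-byte response).
import socket
import threading
import time

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
addr = srv.getsockname()

def echo_server():
    # Echo single bytes back until the client closes the connection.
    conn, _ = srv.accept()
    with conn:
        while True:
            req = conn.recv(1)
            if not req:
                break
            conn.sendall(req)

t = threading.Thread(target=echo_server)
t.start()

cli = socket.create_connection(addr)
# Disable Nagle so each tiny request goes out immediately.
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

transactions = 0
start = time.time()
deadline = start + 0.5
while time.time() < deadline:
    cli.sendall(b"x")
    cli.recv(1)
    transactions += 1
elapsed = time.time() - start

cli.close()
t.join()
srv.close()

print("req/s: %.0f" % (transactions / elapsed))
```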

As a real-world example, I got and put a 520 MB file via FTP, again using 
the nforce driver across gigabit ethernet:
==============================================
Operating          |   FTP get  |   FTP put  |
System             |   (MB/s)   |   (MB/s)   |
-------------------+------------+-------------
openSUSE           |         51 |         48 |
----------------------------------------------
haiku              |         35 |         34 |
==============================================

Judging from all these tests, I'd say that haiku's network stack currently 
shows performance that is far from perfect, but certainly acceptable.

As Marcus has mentioned, the problems with our current scheduler might have 
a rather large impact on the net stack on real hardware, so I will stop my 
little networking tests here and move my attention elsewhere (most probably 
towards missing features like PMTU detection).
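
For anyone curious what PMTU detection involves: on systems that already 
implement it, the stack sends packets with the DF bit set and lowers its 
path-MTU estimate when ICMP "fragmentation needed" errors come back. 
Requesting that behaviour per socket looks like this on Linux (a sketch; 
the option names are Linux-specific and may not exist elsewhere):

```python
# Sketch: asking the kernel to do path-MTU discovery on a socket.
# IP_MTU_DISCOVER / IP_PMTUDISC_DO are Linux-specific constants and
# may be missing on other platforms, hence the hasattr() guard.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
if hasattr(socket, "IP_MTU_DISCOVER"):
    # Always set the DF (don't fragment) bit; the kernel then lowers
    # its path-MTU estimate on ICMP "fragmentation needed" errors.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MTU_DISCOVER,
                    socket.IP_PMTUDISC_DO)
    print("PMTU discovery requested")
else:
    print("IP_MTU_DISCOVER not available on this platform")
```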

cheers,
    Oliver
