Thank you.
There is per-message overhead for several reasons:
- an allocation on each send, because the socket takes ownership of the
message when sending;
- a channel between the user thread and the I/O thread, crossed once by
the sent message and once by the 'return code';
- no prefetch of incoming messages;
- the way the protocol state machine interacts with readiness polling is
very naive;
- others I have not found yet?!
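A minimal sketch of the first two points (hypothetical names and types; Scaproust's actual internals differ): each send allocates an owned buffer, hands it across an mpsc channel to the I/O thread, and then blocks on a second channel for the 'return code', so every message pays one allocation plus two channel crossings.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical request type: an owned buffer plus a reply channel.
enum Request {
    Send(Vec<u8>, mpsc::Sender<Result<(), String>>),
}

// Mock I/O thread: receives owned messages, replies with a 'return code'.
fn spawn_io() -> (mpsc::Sender<Request>, thread::JoinHandle<()>) {
    let (tx, rx) = mpsc::channel::<Request>();
    let handle = thread::spawn(move || {
        while let Ok(Request::Send(_msg, reply)) = rx.recv() {
            // ... a real I/O thread would write `_msg` to the socket here ...
            let _ = reply.send(Ok(())); // second channel crossing: the return code
        }
    });
    (tx, handle)
}

// One user-side send: allocate, cross the channel, block on the reply.
fn send_roundtrip(tx: &mpsc::Sender<Request>, payload: &[u8]) -> Result<(), String> {
    let owned = payload.to_vec(); // allocation: the socket takes ownership
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(Request::Send(owned, reply_tx)).unwrap(); // first channel crossing
    reply_rx.recv().unwrap() // wait for the return code
}

fn main() {
    let (tx, io) = spawn_io();
    assert!(send_roundtrip(&tx, b"hello").is_ok());
    drop(tx); // closing the channel lets the I/O thread exit
    io.join().unwrap();
}
```

This is only an illustration of why the cost is paid per message rather than per byte, which matches the benchmark results below.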
On Wed., 14 Dec. 2016 at 4:59, Garrett D'Amore <garrett@xxxxxxxxxx> wrote:
Good work. It sounds like you have some per-message overheads to cope
with. I don’t have any brilliant ideas, but I’ve not researched your code
yet.
On Wed, Dec 14, 2016 at 1:19 AM, Benoit Labaere <benoit.labaere@xxxxxxxxx>
wrote:
Instead of writing a benchmark, I have ported the performance measurement
utilities found in the perf folder of the nanomsg repository, ran them, and
published the results. To sum up:
- latency is bad, but gets better as the message gets bigger
- throughput is outrageously low, but gets *better* as the message gets
bigger
I will need quite some time to investigate these results.
Here is the full report, to be viewed in a fixed-width font.
*Average latency (µs)*
| Msg Size | Roundtrips | Nanomsg | Scaproust |
|      512 |      50000 |      19 |        26 |
|     1024 |      10000 |      21 |        27 |
|     8192 |      10000 |      23 |        29 |
|   102400 |       2000 |      56 |        61 |
|   524288 |        500 |     323 |       216 |
|  1048576 |        100 |     794 |       782 |
*Average throughput (Mb/s)*
| Msg Size | Msg Count | Nanomsg | Scaproust |
|      512 |   1000000 |    3091 |       312 |
|     1024 |    500000 |    5511 |       605 |
|     8192 |     50000 |   13865 |      3700 |
|   131072 |     10000 |   19694 |     18798 |
|   524288 |      2000 |   16215 |     20708 |
|  1048576 |      1000 |   12501 |     11733 |
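One way to read the small-message rows (a back-of-envelope sketch, not from the report; it assumes Mb/s means megabits per second, as in the nanomsg perf tools): at 512-byte messages the implied cost is roughly 13 µs per message for Scaproust versus about 1.3 µs for nanomsg, consistent with a fixed per-message overhead that gets amortised as messages grow.

```rust
// Back-of-envelope only: derive the implied time per message from the
// throughput table above. Assumes Mb/s = megabits per second, so a
// throughput in Mb/s is numerically equal to bits per microsecond.
fn per_msg_us(msg_size_bytes: u64, throughput_mbps: f64) -> f64 {
    let bits = (msg_size_bytes * 8) as f64;
    bits / throughput_mbps
}

fn main() {
    // 512-byte row of the throughput table: 312 Mb/s vs 3091 Mb/s.
    let scaproust = per_msg_us(512, 312.0); // ~13.1 µs per message
    let nanomsg = per_msg_us(512, 3091.0);  // ~1.3 µs per message
    println!(
        "implied cost: scaproust {:.1} µs/msg, nanomsg {:.1} µs/msg",
        scaproust, nanomsg
    );
}
```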
Regards,
Benoît
On Tue, 22 Nov 2016 at 09:21 Dirkjan Ochtman <dirkjan@xxxxxxxxxx> wrote:
On Mon, Nov 21, 2016 at 11:11 PM, Benoit Labaere
<benoit.labaere@xxxxxxxxx> wrote:
Scaproust v0.2.0 has just been released on crates.io.
Scaproust is a 100% Rust implementation of Nanomsg.