[nanomsg] Re: interesting performance figures

  • From: Garrett D'Amore <garrett@xxxxxxxxxx>
  • To: Peter Kümmel <syntheticpp@xxxxxxx>, nanomsg@xxxxxxxxxxxxx
  • Date: Thu, 27 Mar 2014 08:17:51 -0700

-- 
Garrett D'Amore
Sent with Airmail

On March 27, 2014 at 3:14:44 AM, Peter Kümmel (syntheticpp@xxxxxxx) wrote:

On 27.03.2014 06:41, Garrett D'Amore wrote: 
> http://garrett.damore.org/2014/03/early-performance-numbers.html 

Interesting numbers! (I would also add a column with the ratio) 

Looks like inproc performs best in comparison to nanomsg, and gives 
the only test where Go beats nanomsg (4k messages, C:4MB/s Go:5MB/s). 
I was actually surprised that there were *any* such cases.  The nanomsg code is 
pretty mean and lean. That said, I might see better scalability with thousands 
of clients, but I’ve not written test cases for that.  It’s not a pressing 
concern for me at the moment. :-)



Is the inproc implementation just a wrapper around Go's message passing 
mechanism, and thus highly optimized by the Go developers? 


Yes.  However, there is one area where I have not optimized the inproc …. and 
that is that I wind up doing one extra message copy, because of the awkward split 
between header and payload.   I can do some optimization there, but I think 
inproc is probably not a common use case.

Btw, with inproc now working, it should definitely be possible to try this out 
on play.golang.org — I haven’t done so yet, but I will probably do so if I find 
some free time later today.  



I wonder if you have an idea of how well other inproc frameworks perform, 
especially Qt's queued connections. 

Nope, haven’t looked.



        - Garrett





Peter 


> 
> That’s my brief update on my work. I’ve added inproc, PAIR protocol support, 
> and benchmarks (tests). I know folks want 
> UDP… I’m a little concerned about proceeding with UDP only because of the 
> difficulty of keeping track of pipes on a 
> connectionless (and no keepalives) transport. For req/rep, this could get 
> ugly. I need to think about this more. 
> It’s also unclear how pub/sub works in this case, since there isn’t an actual 
> “connected” end point to keep track of. 
> The semantics get… sticky. And as others have pointed out, there may be other 
> concerns about enabling UDP (congestion 
> control, etc.) I don’t know what the right answer here is. (For me, it’s to 
> stick with TCP. :-) But my use cases are 
> relatively simple and I’m not trying to stream or to multiplex a bunch of 
> different things onto a single TCP channel. :-) 
> 
> — Garrett 
> 

