[nanomsg] Re: More Go nanomsg updates

  • From: Gonzalo Diethelm <gonzalo.diethelm@xxxxxxxxx>
  • To: nanomsg <nanomsg@xxxxxxxxxxxxx>
  • Date: Mon, 14 Apr 2014 14:18:51 -0300

Garrett, what is the state of support in Go for IPC under Windows (i.e. named
pipes)? Could this be something easily added to gonano (including a way to
choose a specific pipe name)?
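
For context on why this is awkward today: on POSIX systems an ipc://
endpoint can sit directly on Go's standard library as a Unix domain
socket, but Windows named pipes have no stdlib equivalent, so a transport
would have to drop to syscall-level CreateNamedPipe handling. A minimal
POSIX-only sketch (my own illustration, not gonano code):

    package main

    import (
        "log"
        "net"
    )

    func main() {
        // POSIX only: an ipc:// endpoint maps naturally onto a Unix
        // domain socket.  A Windows named pipe such as \\.\pipe\foo has
        // no net.Listen equivalent and would need CreateNamedPipe /
        // ConnectNamedPipe through package syscall (or a wrapper).
        l, err := net.Listen("unix", "/tmp/example.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer l.Close()
        for {
            c, err := l.Accept()
            if err != nil {
                log.Fatal(err)
            }
            c.Close() // a real transport would frame SP messages here
        }
    }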

Again, thanks for all your work, it is great for the project to have
alternative implementations.



On Mon, Apr 14, 2014 at 12:58 PM, Garrett D'Amore <garrett@xxxxxxxxxx> wrote:

> More Go nanomsg updates: I’ve added the missing patterns, so right now the
> only functionality that nanomsg has that my implementation lacks is a common
> Device() function, and a nanocat command line utility.  I’ll be adding
> those soon.  (And then I’ll turn my attention to adding TLS to nanomsg
> itself, and probably websocket to both nanomsg and my implementation.)
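>
> (A Device() is essentially a bidirectional forwarder between two
> sockets.  A minimal sketch of its shape, using a stand-in Socket
> interface rather than my actual API:
>
>     package device
>
>     // Socket is a stand-in for illustration only; it is not the
>     // interface this implementation actually exports.
>     type Socket interface {
>         Recv() ([]byte, error)
>         Send([]byte) error
>     }
>
>     // Device shuttles messages in both directions until either
>     // direction fails.
>     func Device(a, b Socket) {
>         go forward(a, b)
>         forward(b, a)
>     }
>
>     func forward(from, to Socket) {
>         for {
>             m, err := from.Recv()
>             if err != nil {
>                 return
>             }
>             if err := to.Send(m); err != nil {
>                 return
>             }
>         }
>     }
>
> A real device runs over raw sockets so that routing headers pass through
> intact, which the sketch glosses over.)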
>
> As before, it also supports the TLS transport and a new STAR pattern,
> both of which are unique to this implementation (not supported in
> nanomsg).  (Martin, I’m using protocol number 100 for STAR, for now.  We
> should probably define a region of numbers for experimental
> protocols/patterns which might not be considered “portable” between
> implementations in applications.)
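>
> One possible shape for such a carve-out, purely as a suggestion, since
> nothing in the SP specs defines this today:
>
>     // Suggested convention, not an agreed standard: a block of
>     // protocol numbers reserved for experimental patterns that may
>     // not be portable between implementations.
>     const (
>         protoExperimentalMin = 100 // assumed start of the range
>         protoExperimentalMax = 127 // assumed end of the range
>         protoSTAR            = 100 // the number in use here, for now
>     )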
>
> As a little bonus, I did some more minor tuning, and my req/rep latency
> dropped from 6458 ns/op to 5339 ns/op.  That means that this implementation
> is now about 20% *faster* than native nanomsg when using inproc.  The other
> transports got minor improvements too, mostly in the single-threaded
> (GOMAXPROCS=1, i.e. default configuration) runs, which gained between
> 1 and 8 usec (or about 15%-30%) depending on the transport.
> This brings me within spitting distance of nanomsg for TCP and IPC,
> although I think I’m still paying a penalty for the extra distance from
> the native file descriptors that Go imposes.  (For example, IPC is
> still about 3 usec slower than native nanomsg: 18.3 usec vs 15.7 usec.)
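>
> (Those ns/op figures come straight from Go's benchmark harness.  The
> req/rep benchmark has roughly the following shape; here a toy
> channel-based socket stands in for the real pair so the sketch is
> self-contained:
>
>     package sp_test
>
>     import "testing"
>
>     // toySock is a stand-in for a real SP socket, wired over channels
>     // so this sketch compiles on its own; the real benchmark dials and
>     // listens on the given address instead.
>     type toySock struct{ in, out chan []byte }
>
>     func (s toySock) Send(m []byte) { s.out <- m }
>     func (s toySock) Recv() []byte  { return <-s.in }
>
>     func newReqRepPair(addr string) (toySock, toySock) {
>         a, b := make(chan []byte, 1), make(chan []byte, 1)
>         return toySock{in: b, out: a}, toySock{in: a, out: b}
>     }
>
>     // BenchmarkReqRepInproc measures one full request/reply round
>     // trip per iteration.
>     func BenchmarkReqRepInproc(b *testing.B) {
>         req, rep := newReqRepPair("inproc://bench")
>         b.ResetTimer()
>         for i := 0; i < b.N; i++ {
>             req.Send([]byte("ping"))
>             rep.Recv()
>             rep.Send([]byte("pong"))
>             req.Recv()
>         }
>     }
>
> Running the same benchmarks with GOMAXPROCS=1 versus a larger value is
> what produces the single- versus multi-threaded numbers below.)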
>
> Sadly, when running multithreaded, we don’t see quite the same latency
> benefit, and throughput suffers by about 2-3%.  (When running
> single-threaded, throughput *improves* by about the same amount.)
>
> There are also now options to configure timeouts, etc., and a “common” test
> framework (which still needs a lot more work).
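>
> Usage is along these lines (the option key and the string-keyed
> SetOption shape shown here are illustrative only; see the docs in the
> repo for the exact names):
>
>     package sp_example
>
>     import (
>         "log"
>         "time"
>     )
>
>     // optionSetter stands in for the socket type; the SetOption
>     // signature and option key here are illustrative, not a
>     // guaranteed stable API.
>     type optionSetter interface {
>         SetOption(name string, value interface{}) error
>     }
>
>     func configureTimeouts(s optionSetter) {
>         if err := s.SetOption("RECV-DEADLINE", 100*time.Millisecond); err != nil {
>             log.Fatal(err)
>         }
>     }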
>
> As usual, the code can be found here:  https://bitbucket.org/gdamore/sp
> --
> Garrett D'Amore
> Sent with Airmail
>



-- 
Gonzalo Diethelm
gonzalo.diethelm@xxxxxxxxx
