Performance statistics are somewhat hard to gather, and harder to use
generically.

The difficulty of gathering is that you need to measure the entire latency;
really you want to modify the wire protocol to carry timestamps. Some
performance data we can report, such as queue depth, etc.

The difficulty of use is that everyone’s systems and applications are so
wildly different that what might be perfectly normal for one user might be
tragically bad for another.

In actuality, there has been very little demand for performance-related bug
analysis. I can recall precisely one user over the years griping about
inproc performance vs. ZMQ. My response at the time was “I don’t care”.
(Because inproc performance was never terribly important to me, since I
don’t think it sees much real-world use, and I also had bigger fish to fry
trying to fix the MT-safety things in inproc. libnng is me basically
giving up on that. I couldn’t figure out how to make libnanomsg’s inproc
completely thread-safe without completely trashing the performance. The
FSM architecture made passing data across the different sockets (which
had different lock hierarchies) too damned difficult.)
On Fri, Jan 6, 2017 at 7:00 AM, Karan, Cem F CIV USARMY RDECOM ARL (US) <cem.f.karan.civ@xxxxxxxx> wrote:
You were mentioning that you were planning on adding statistics-gathering
code; have you considered adding code that gathers performance statistics
as well? As end users, we could send you bug reports that include those
statistics, which would let you know whether there is a real problem or not.
Thanks,
Cem Karan
-----Original Message-----
From: nanomsg-bounce@xxxxxxxxxxxxx [mailto:nanomsg-bounce@xxxxxxxxxxxxx] On Behalf Of Garrett D'Amore
Sent: Thursday, January 05, 2017 1:37 PM
To: nanomsg@xxxxxxxxxxxxx
Subject: [nanomsg] Re: [Non-DoD Source] status update libnng
I could certainly make it an array. :-) If it seems to be universal that
nobody needs more than a few hundred of these things, I’ll switch to an
array.

I wasn’t so much worried about list traversal times as I was about the
O(n) vs. O(log(n)) thing, though. If we know that O(n) is good enough,
then certainly an array is probably going to be superior. One thing about
an array is that if I keep it sorted, then subscribe operations will
become more expensive, since I’ll have to shuffle on average half the
items in the array. I’m assuming subscription changes occur less
frequently than messages arrive. (I’d really like to know if anyone has
an application where changes to subscriptions — subscribe/unsubscribe —
are a performance-sensitive operation.)

- Garrett
On Thu, Jan 5, 2017 at 10:23 AM, Daniel C <danielcccc@xxxxxxxxx> wrote:
I agree with Jason; an array would be faster. N is definitely less than 10 or 20 in my case.
Dan
On Thu, Jan 5, 2017 at 7:58 AM, Jason E. Aten <j.e.aten@xxxxxxxxx> wrote:
For a small N < 10, I would expect an array to be much faster than a linked list. My typical use is for small N, about 3.
On Thu, Jan 5, 2017 at 4:52 AM, Garrett D'Amore <garrett@xxxxxxxxxx> wrote:
TCP works now. OMG, close() vs. accept() is such a non-portable
nightmare. I almost went down the path of pthread cancellation.
Thankfully, that crisis is now averted. I never realized just how broken
the use of UNIX file descriptors in the face of close() and multiple
threads really is. But it’s working now. Thank goodness for shutdown(2)
and dup2().
There’s a test suite for TCP and inproc, and I’ve started PUB/SUB. Sub
is actually done, untested. I give 50/50 odds on me finishing up the
remaining protocols and IPC before the weekend.

What won’t happen before the weekend is writing the eventing framework
(necessary for file descriptor based notifications, for example) or the
statistics framework. And Windows will probably happen next week. I’ve
designed this such that I think I will be able to crank out the Windows
stuff super fast.

One question about PUB/SUB. I’ve for now used a simple sorted linked
list for subscriptions. My instinct is that for the vast, vast majority
of folks this is not only sufficient, but probably superior to a
patricia tree; I think most of us only maintain around a dozen or so
subscriptions on any subscriber at any given time. That said, I’m keen
to hear of actual uses of nanomsg where the patricia tree makes a real
difference — cases with over several hundred active subscriptions on a
given end node. (Note that this is only on the end node; a server can
service many, many clients — thousands — each having dozens of
subscriptions — with zero negative impact in my design.)

To be clear, the linked list means that filtering messages is now an
O(n) instead of an O(log(n)) operation (where n is the number of active
subscriptions); I suspect that for small values of n, the linked list
approach I’ve taken is faster. I have implemented it this way for now
for expediency, but I’m open to pulling in a patricia tree if there is
need. (I didn’t yank Martin’s old code because I want to make sure I
thoroughly understand it before I bring it into the new code base — I
haven’t had time to be certain of that yet.)
On Wed, Jan 4, 2017 at 2:18 AM, Garrett D'Amore <garrett@xxxxxxxxxx> wrote:
I’ve just committed the initial swag at TCP. Turns out it was a bit more
code than anticipated, as I want to support TCPv4 and TCPv6, and
properly support different platforms that might have different ways to
resolve names or handle low level TCP details. This should make it
really fast to write the winsock code later.

I’ve also tried to abstract the details so that transports that want to
build on top of TCP (e.g. websocket or TLS) can do so easily.

Totally untested, but you can look at the last commit if you want to see
what I’m up to. The main missing thing (besides testing that it actually
works) is support for some of the richer TCP options — e.g. KEEPALIVEs
etc. I am disabling Nagle by default, because I think 99% of nanomsg
users have no business wanting Nagle turned on. (And I’ve taken care to
use writev to avoid splitting writes up across separate packets when
possible. This is something that sadly isn’t possible using Go….)
On Tue, Jan 3, 2017 at 10:54 AM, Karan, Cem F CIV USARMY RDECOM ARL (US) <cem.f.karan.civ@xxxxxxxx> wrote:
Got it, my brain glitched and I forgot that inproc is a thing; for some
reason I thought you were building out your own TCP stack, which would
have made me... concerned. ;)

Thanks,
Cem Karan
> -----Original Message-----
> From: nanomsg-bounce@xxxxxxxxxxxxx [mailto:nanomsg-bounce@xxxxxxxxxxxxx] On Behalf Of Garrett D'Amore
> Sent: Tuesday, January 03, 2017 1:23 PM
> To: nanomsg@xxxxxxxxxxxxx
> Subject: [nanomsg] Re: [Non-DoD Source] status update libnng
>
> I’m about 24 hours from having TCP working. Right now the TCP transport
> is not written, though I’ve written a fair bit of it locally.
>
> The REQ/REP pattern is working now, as is PAIR, and inproc is working
> quite well and is totally thread safe.
>
> As far as lockless algorithms go — I’m a big believer in mutexes. I
> need mutexes and condition variables to make things work well, and in
> my experience, if they are used sensibly (with minimal contention),
> they do not significantly impact performance.
>
> For some things lockless data structures are superior, but you have to
> have use cases that work for them; with mutexes and condvars I don’t
> have to think about this; they Just Work.
>
> It may be useful to look at alternative designs for that in the future,
> but I’m only going to do that once I’ve demonstrated a clear need or
> benefit (or maybe someone else will contribute at that point). Until
> then, I need to focus on getting the code working with a sensible
> design and avoid prematurely optimizing things.
>
> - Garrett
>
> On Tue, Jan 3, 2017 at 6:45 AM, Karan, Cem F CIV USARMY RDECOM ARL (US) <cem.f.karan.civ@xxxxxxxx> wrote:
>
> > -----Original Message-----
> > From: nanomsg-bounce@xxxxxxxxxxxxx [mailto:nanomsg-bounce@xxxxxxxxxxxxx] On Behalf Of Garrett D'Amore
> > Sent: Monday, December 26, 2016 2:46 AM
> > To: nanomsg@xxxxxxxxxxxxx
> > Subject: [Non-DoD Source] [nanomsg] status update libnng
> >
> > Just a brief Christmas (western) update…
> >
> > libnng is now managing connections. There’s a lot more to this than
> > you’d think — and a lot of details that I’ve written under the hood.
> >
> > I expect that by New Years I’ll have it in a functional state for TCP
> > on POSIX systems, for at least the primary patterns / protocols. TBH,
> > I’m really only a couple of hours from that point.
> >
> > Note that the current code base is pretty much crash-immune with
> > respect to things that caused grief in libnanomsg — e.g. ENOMEM
> > errors will not impact libnng. The whole approach has been “correct
> > first”, so I expect we will have fewer problems once we actually
> > start using this.
> >
> > Things are moving apace… quite quickly, actually.
> >
> > If anyone is so inclined to review or give feedback on the existing
> > code, I’m now at the point to receive it, understanding that there
> > are still large swathes unimplemented. You can’t use it for TCP, for
> > example. But at this point I’d rather solicit feedback early rather
> > than late. Don’t tell me what’s missing, but if you see problems with
> > what’s already there, please *do* let me know.
> >
> > I’m going on a ski trip for the next several days, but will probably
> > work in the evenings getting the thing to a state where other folks
> > can start playing with it for actual experimentation.
> >
> > I’m really looking forward to benchmarking this thing. I think we’re
> > gonna blow the doors off libnanomsg, at least for platforms that have
> > non-crappy pthreads. :-)
> >
> > Merry Christmas everyone! :-)
>
> Thank you for all your hard work!
>
> BTW, when you say that you can't use it for TCP, what exactly do you mean?
>
> Thanks,
> Cem Karan