[nanomsg] Re: ZMQ_ROUTER like functionality

  • From: David Robillard <d@xxxxxxxxxxxx>
  • To: nanomsg@xxxxxxxxxxxxx
  • Date: Wed, 17 Sep 2014 16:00:39 -0400

On Wed, 2014-09-17 at 20:23 +0300, Paul Colomiets wrote:
> Hi,
> 
> On Wed, Sep 17, 2014 at 7:50 PM, Dirkjan Ochtman <dirkjan@xxxxxxxxxx> wrote:
> > On Wed, Sep 17, 2014 at 6:42 PM, David Robillard <d@xxxxxxxxxxxx> wrote:
> >> It seems that raw nanomsg sockets make it possible to do such things,
> >> but how to go about it isn't clear to me.
> >
> > They do, and there are numerous discussions about it in the mailing
> > list archives. The gist of it is that a recv on an AF_SP_RAW socket
> > gives you a small header plus the actual message, and that header
> > lets you route the reply back to the appropriate client. Sorry that
> > we don't have any particularly useful documentation for this; do ask
> > more questions in this thread after digging through the list archive
> > for a bit.
> >
> 
> That's not exactly true. ZMQ_ROUTER allows you to connect a ROUTER to
> a ROUTER, along with some other features that make the combination
> useful. But in nanomsg you can't do that.
> 
> Nanomsg is designed to avoid non-scalable things, and the stateful
> routing scheme you described is inherently non-scalable. While there
> have been attempts to build a scalable stateful routing pattern, no
> one has managed it yet.

Well, "scalable" and "handled magically by nanomsg" aren't quite the
same thing.  In this case, the load-balancing and routing to particular
workers is inherently application specific.
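
(For concreteness, the raw-socket mechanics Dirkjan describes look
roughly like the sketch below.  It's untested and from memory, using
the NN_MSG allocation convention; check it against nn_recvmsg(3)
before trusting the details.)

#include <string.h>
#include <nanomsg/nn.h>
#include <nanomsg/reqrep.h>

/* Sketch: serve one request on a raw REP socket.  The SP routing
   header arrives as ancillary data in `control'; sending it back
   unchanged is what routes the reply to the right client. */
static void echo_one (int s)   /* s = nn_socket (AF_SP_RAW, NN_REP) */
{
    struct nn_msghdr hdr;
    struct nn_iovec iov;
    void *body;      /* payload buffer, allocated by nanomsg */
    void *control;   /* ancillary data holding the routing header */

    iov.iov_base = &body;
    iov.iov_len = NN_MSG;
    memset (&hdr, 0, sizeof (hdr));
    hdr.msg_iov = &iov;
    hdr.msg_iovlen = 1;
    hdr.msg_control = &control;
    hdr.msg_controllen = NN_MSG;

    if (nn_recvmsg (s, &hdr, 0) < 0)
        return;

    /* ... application-specific processing of body goes here ... */

    /* nn_sendmsg takes ownership of body and control (NN_MSG). */
    nn_sendmsg (s, &hdr, 0);
}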

The system is very scalable (partially by virtue of both servers and
workers being multi-threaded with ZMQ-based inproc load balancing), but
this particular part of it isn't the sort of thing any network library
could do on its own.

(The application is a very large database of sorts, fragmented across
workers, so the workers are inherently not interchangeable: no single
worker can store the entire data set.)
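
(If it helps to see it concretely: the routing decision is a pure
function of the record key.  A hypothetical helper, with made-up
names; any stable hash would do in place of djb2:)

#include <stddef.h>

/* Hypothetical: map a record key to the worker owning its shard. */
static unsigned pick_worker (const char *key, size_t len,
                             unsigned nworkers)
{
    unsigned h = 5381;              /* djb2 string hash */
    while (len--)
        h = h * 33 + (unsigned char) *key++;
    return h % nworkers;
}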

> So yes, the only solution so far is to make a socket per worker.

Socket per server thread per worker, in this case.  Sharing worker
sockets across server threads might be doable, but the contention
would hurt the server's scalability, and I doubt that solution would
be any good.

If having many sockets isn't inherently a performance issue, then that
should be fine.  Memory usage on the server isn't particularly
limited.
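
(Concretely, the layout amounts to each server thread owning an array
of REQ sockets, one per worker, and indexing into it by shard.  A
hypothetical sketch; `worker_addr' and NWORKERS are made up:)

#include <nanomsg/nn.h>
#include <nanomsg/reqrep.h>

#define NWORKERS 16   /* hypothetical worker count */

/* Each server thread opens its own set, so the sockets are never
   shared between threads and there is no lock contention. */
static int open_worker_sockets (int socks [NWORKERS],
                                const char *const worker_addr [NWORKERS])
{
    for (int i = 0; i != NWORKERS; ++i) {
        socks [i] = nn_socket (AF_SP, NN_REQ);
        if (socks [i] < 0)
            return -1;
        if (nn_connect (socks [i], worker_addr [i]) < 0)
            return -1;
    }
    return 0;
}

/* Routing a request is then just:
   int s = socks [pick_worker (key, keylen, NWORKERS)]; */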

Thanks,

-- 
dr


