[nanomsg] Re: Where "Channel ID" are managed in the code ?

  • From: Paul Colomiets <paul@xxxxxxxxxxxxxx>
  • To: "nanomsg@xxxxxxxxxxxxx" <nanomsg@xxxxxxxxxxxxx>
  • Date: Tue, 14 Jan 2014 11:30:49 +0200

Hi Laurent,


On Tue, Jan 14, 2014 at 11:00 AM, Laurent Alebarde <l.alebarde@xxxxxxx> wrote:

>  Hi Martin,
>
> Thank you for your valuable reply.
>
> On 14/01/2014 09:09, Martin Sustrik wrote:
>
> Hi Laurent,
>
> On 13/01/14 19:36, Laurent Alebarde wrote:
>
>  Analysing rep.c / nn_rep_recv, my understanding is that the
> backtrace (the stack of channel IDs) is stored in the socket, and that
> if the REP socket receives a second message before nn_rep_send has been
> called for the first one, the first message is cancelled. Can you
> confirm?
>
>  Yes, the stack from the request is stored in the socket, so that it
> can be re-attached to the reply.
>
> And no: if a new request is received while an old one is still being
> processed, it is queued in the TCP buffers and not yet read by nanomsg.
>
>  Great
>
>   For intermediate devices, I can imagine the send takes priority over
> the receive, so I assume no message is cancelled. But for an endpoint,
> say a worker, what happens if the worker's processing is long compared
> to the typical request interval? Isn't the service effectively down, as
> all messages may be cancelled?
>
>  Pushback is applied, i.e. while the device is processing a message it
> doesn't read any more messages from TCP.
>
>  Yes, that's consistent with your first answer.
>
>  All in all, to achieve your goal I would suggest doing the following:
>
> 1. Add a new socket option, say NN_REQ_SCHEDULER, with possible values
> of NN_REQ_ROUND_ROBIN (the current implementation, and the default) and
> NN_REQ_STICKY. The latter, when applied to a REQ socket, would cause
> requests from the same client to be sent, if possible, to the same
> worker. If the old worker is not available, the message would be sent
> to a different worker.
>
>  In my use case, if the message is sent to a different worker, it will
> fail, because the exchange is stateful. So the best way here is to drop
> the message, IMO. I think NN_REQ_STICKY is useful only for stateful
> messaging, and in stateful processing, reassigning one peer to another
> fails anyway, unless you add some management to reinitialise the exchange
> from the beginning, which is useless complexity. This is not just a
> performance optimisation that uses cached data as much as possible: here
> I have no means of recovering on the new worker. But there is probably a
> need for two different behaviours of NN_REQ_STICKY. I will do it with a
> simple drop.
>

Have you seen this thread?
//www.freelists.org/post/nanomsg/Trying-to-implement-directory-pattern

It's not implemented yet. But does it match your use case? It may be
better to invest time into this, because presumably it serves more use
cases than NN_REQ_SCHEDULER.
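
For comparison, the NN_REQ_SCHEDULER idea quoted above would boil down to
something like the sketch below on the requester side. To be clear, this is
hypothetical: NN_REQ_SCHEDULER, NN_REQ_ROUND_ROBIN and NN_REQ_STICKY do not
exist in the current API, so the constants are placeholders and the
nn_setsockopt call would simply fail with EINVAL today.

#include <nanomsg/nn.h>
#include <nanomsg/reqrep.h>

/* Placeholder definitions for the proposed, not yet existing, option. */
#ifndef NN_REQ_SCHEDULER
#define NN_REQ_SCHEDULER 2      /* hypothetical option name */
#define NN_REQ_ROUND_ROBIN 0    /* hypothetical value: current behaviour */
#define NN_REQ_STICKY 1         /* hypothetical value: stick to one worker */
#endif

int open_sticky_req (const char *addr)
{
    int s = nn_socket (AF_SP, NN_REQ);
    if (s < 0)
        return -1;

    /* Ask the REQ socket to keep routing requests from this client to the
       same worker whenever possible, instead of round-robining them. */
    int sched = NN_REQ_STICKY;
    if (nn_setsockopt (s, NN_REQ, NN_REQ_SCHEDULER,
                       &sched, sizeof (sched)) < 0 ||
        nn_connect (s, addr) < 0) {
        nn_close (s);
        return -1;
    }
    return s;
}

Whether a request is then dropped or re-routed when the sticky worker goes
away is a separate policy decision, as Laurent notes above.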

From the opposite point of view, if we add some kind of NN_SHARDING
option to all patterns, it may be interesting to have sharded variants
not only of request-reply (as the directory pattern describes), but of
pipeline and other patterns as well.
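
For reference, the behaviour Martin confirms earlier in the thread needs no
new option at all. With the current API a REP worker reads exactly one
request at a time: the channel-ID backtrace is kept in the socket between
nn_recv and nn_send, and while the handler runs, further requests simply
stay in the TCP buffers. A minimal sketch (the address and the handler are
placeholders):

#include <stdio.h>
#include <string.h>
#include <nanomsg/nn.h>
#include <nanomsg/reqrep.h>

/* Placeholder for the application's (possibly slow) request handler. */
static const char *process_request (const void *req, int len)
{
    (void) req; (void) len;
    return "OK";
}

int run_worker (const char *addr)
{
    int s = nn_socket (AF_SP, NN_REP);
    if (s < 0)
        return -1;
    if (nn_bind (s, addr) < 0) {
        fprintf (stderr, "bind failed: %s\n", nn_strerror (nn_errno ()));
        nn_close (s);
        return -1;
    }

    for (;;) {
        void *req = NULL;
        int n = nn_recv (s, &req, NN_MSG, 0);   /* one request at a time */
        if (n < 0)
            break;

        /* While this runs, pushback applies: no further request is read. */
        const char *reply = process_request (req, n);
        nn_freemsg (req);

        /* nn_send re-attaches the stored backtrace to route the reply. */
        if (nn_send (s, reply, strlen (reply), 0) < 0)
            break;
    }
    return nn_close (s);
}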

--
Paul
