[nanomsg] Re: REQ/REP worker example

  • From: Ramakrishna Mallireddy <ramakrishna.malli@xxxxxxxxx>
  • To: "nanomsg@xxxxxxxxxxxxx" <nanomsg@xxxxxxxxxxxxx>
  • Date: Tue, 17 Jun 2014 13:59:50 +0530

Please look at this sample code: http://pastebin.com/rqpdaQmH

This is what I am trying to do:

I have a webserver that generates requests to the end-point
"ipc://oauth2". [Not in the sample.]
I am running a process with 8 threads, of which 1 is used for the nn_device
broker and the remaining 7 for serving replies on the end-point
"ipc://oauth2handler".
The two end-points are connected to the nn_device broker. [Sample code at
the pastebin link above.]

Now when I run the sample process, I do not see any data/logs at the
nn_device or at the reply end-point.
My webserver log shows that nn_send succeeds and reports the number of
bytes sent, but neither the nn_device thread nor the reply end-point threads
show any debug logs/output (I compiled nanomsg with debug enabled).

The nn_device broker and all the reply threads are running but are blocked
in nn_recv.

Can anyone point out where I am going wrong? Also, can I use Wireshark or a
similar tool to view nanomsg network data?

Thanks
R K.




On Mon, Jun 16, 2014 at 7:11 PM, Ramakrishna Mallireddy <
ramakrishna.malli@xxxxxxxxx> wrote:

> Hi Drew,
> All my requests originate from one or more web-servers, and all
> requests should mostly run concurrently at the REP handler app/server.
>
> Did I choose the best pattern [REQ/REP] for my problem?
>
> Assuming that REQ/REP fits my problem domain, I am working with nn_device
> with a thread pool of REP sockets connected to the REQ end-point of
> nn_device, and I have the following queries regarding this setup:
>
> Is there any callback/API to check for waiting requests at nn_device?
>
> Keeping latency & performance in mind, should we explore option (b) or
> stick with option (a)?
> a. Have a predefined number of threads created, initialised, and connected
> to nn_device, with all sockets in receive mode.
> b. Keep the same predefined limit, but create and destroy threads
> dynamically, so that one REP thread is always waiting for a request until
> the predefined number of threads is reached.
>
> If I am working within a LAN, is specifying a local interface like
> "tcp://eth0;192.168.0.111:5555" the only and best option?
>
> Thanks
> R K
>
>
> On Fri, Jun 13, 2014 at 1:52 PM, Ramakrishna Mallireddy <
> ramakrishna.malli@xxxxxxxxx> wrote:
>
>> Hi Drew,
>>
>> Thanks for the quick reply. I have not looked into nn_device(); I will
>> look into it and get back to you.
>>
>> Thanks
>> R K
>>
>>
>> On Fri, Jun 13, 2014 at 11:13 AM, Drew Crawford <drew@xxxxxxxxxxxxxxxxxx>
>> wrote:
>>
>>> Hi RK,
>>>
>>> You’re correct that “full-blown” (e.g. AF_SP) sockets don’t process more
>>> than one message at a time.  So a REQ socket has at most one request
>>> outstanding, and same for REP.  There has been talk about removing this
>>> limitation, and I think an API was agreed to, in principle, but there
>>> hasn’t been any implementation.
>>>
>>> Raw sockets (e.g. AF_SP_RAW) are sort of the superclass of an AF_SP
>>> socket, and they do not have this limitation.  However, they don't have full
>>> functionality either, so the application programmer would have to do more
>>> work to make them compatible with standard sockets, particularly when a raw
>>> server tries to respond to a full-blown client by using nn_send().
>>>
>>> In the absence of a better implementation for “multiflight” req/rep,
>>> you’re going to want to create a REP worker pool, where each REP socket in
>>> the pool runs on its own thread.  This should be pretty straightforward
>>> since each REP worker is basically a synchronous REP implementation,
>>> exactly like the sample code.
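>>>
>>> For illustration, each worker could look something like the sketch below
>>> (the endpoint name, pool size, and the echo reply are all placeholders;
>>> error handling omitted):
>>>
>>> #include <pthread.h>
>>> #include <nanomsg/nn.h>
>>> #include <nanomsg/reqrep.h>
>>>
>>> /* One synchronous REP worker per thread. */
>>> static void *rep_worker(void *arg)
>>> {
>>>     const char *endpoint = arg;               /* address the workers connect to */
>>>     int rep = nn_socket(AF_SP, NN_REP);
>>>     nn_connect(rep, endpoint);
>>>     for (;;) {
>>>         char *msg = NULL;
>>>         int n = nn_recv(rep, &msg, NN_MSG, 0); /* block until a request arrives */
>>>         if (n < 0) break;
>>>         nn_send(rep, msg, n, 0);               /* placeholder reply: echo the request */
>>>         nn_freemsg(msg);
>>>     }
>>>     nn_close(rep);
>>>     return NULL;
>>> }
>>>
>>> void start_pool(char *endpoint, int nworkers)
>>> {
>>>     for (int i = 0; i < nworkers; i++) {
>>>         pthread_t tid;
>>>         pthread_create(&tid, NULL, rep_worker, endpoint);
>>>         pthread_detach(tid);
>>>     }
>>> }
>>>
>>> Since each thread blocks in nn_recv(), the pool size directly caps how many
>>> requests are processed concurrently.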
>>>
>>> Then the question becomes how to assign a request to a particular
>>> worker.  The simplest way is just to publish a list of your REP workers and
>>> have the client REQ socket connect to all of them.  In the standard
>>> configuration, the REQ socket knows how to choose an available REP socket
>>> it is connected to for each request, so that is the end of it.
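>>>
>>> As a minimal sketch of that configuration (the worker addresses here are
>>> made up), the client just connects its single REQ socket to every worker:
>>>
>>> int req = nn_socket(AF_SP, NN_REQ);
>>> nn_connect(req, "tcp://192.168.0.10:5555");  /* worker 1 */
>>> nn_connect(req, "tcp://192.168.0.11:5555");  /* worker 2 */
>>> nn_connect(req, "tcp://192.168.0.12:5555");  /* worker 3 */
>>> nn_send(req, "ping", 4, 0);                  /* routed to one available worker */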
>>>
>>> However, you may not want to publish a list of all workers down to your
>>> clients, because you change them dynamically, or want to hide the server
>>> implementation details, etc.  In this case you would set up a broker
>>> thread/process, that would know the details of your workers.  And this
>>> broker process would receive client requests on a REP socket, and emit them
>>> on a REQ socket, and receive worker responses on its REQ socket that it
>>> then posts back to the client(s) on the REP socket. That worker-side REQ
>>> socket is connected to the complete pool of workers, and since reqrep
>>> automagically knows how to choose an available REP socket from the sockets
>>> a REQ is connected to, things just work.
>>>
>>> One problem. If the broker’s logic looked like this:
>>>
>>> int broker() {
>>>     int req = nn_socket(AF_SP, NN_REQ);   /* worker-facing side */
>>>     int rep = nn_socket(AF_SP, NN_REP);   /* client-facing side */
>>>     while(1) {
>>>         nn_recv(rep,…);   /* take one client request */
>>>         nn_send(req,…);   /* forward it to a worker */
>>>         nn_recv(req,…);   /* wait for that worker's reply */
>>>         nn_send(rep,…);   /* return it to the client */
>>>     }
>>> }
>>>
>>> …then the broker will only process one message at a time.  This is
>>> because our hypothetical broker has the same limitation that you describe,
>>> i.e. it uses full-blown sockets, where only one request can be in flight at
>>> a time on either side.
>>>
>>> To solve this, nanomsg ships with a built-in broker, called nn_device().
>>>  nn_device uses raw sockets, and allows messages to flow freely between the
>>> sockets, without limitations as to “one request at a time”.  Since
>>> nn_device does not actually handle any requests (but simply passes them to
>>> a socket where they are eventually handled by a full-blown socket), this
>>> avoids a lot of the problems that would arise if you tried to respond to
>>> client requests within the raw socket yourself by calling
>>> nn_send(my_raw_rep_socket,…);
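>>>
>>> A minimal sketch of wiring up the device (the endpoint names are made up;
>>> error checks omitted):
>>>
>>> int front = nn_socket(AF_SP_RAW, NN_REP);   /* clients' REQ sockets connect here */
>>> int back  = nn_socket(AF_SP_RAW, NN_REQ);   /* workers' REP sockets connect here */
>>> nn_bind(front, "ipc://frontend");
>>> nn_bind(back,  "ipc://backend");
>>> nn_device(front, back);                     /* blocks, shuttling messages both ways */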
>>>
>>> Drew
>>>
>>>
>>>
>>> On Jun 13, 2014, at 12:04 AM, Ramakrishna Mallireddy <
>>> ramakrishna.malli@xxxxxxxxx> wrote:
>>>
>>> > I am new to nanomsg and I am looking to implement a client/server
>>> service using REQ/REP.
>>> >
>>> > The server is a stateless service, but it depends on another Google
>>> service to reply to incoming requests. The REP service should either be
>>> asynchronous, so that it can handle other REQ requests in the queue while
>>> the data gets ready from the Google service, or it should start workers
>>> [ideally from a pool] to process the REQ requests in the queue.
>>> >
>>> > But I have not found any examples that do any of the above, or maybe I
>>> missed some docs.
>>> >
>>> > The example req/rep is targeted at synchronous REP implementations.
>>> >
>>> > I am looking for how to start a worker / create a pool of workers at
>>> service start and assign a worker to each incoming request.
>>> >
>>> > Or, with an asynchronous REP, how can I send the reply to the
>>> appropriate/correct client/REQ when the data gets ready?
>>> >
>>> > I have seen a thread suggesting that SP RAW can be used, but I am not
>>> sure how it helps in my case.
>>> >
>>> > I appreciate any help or guidance.
>>> >
>>> > Thanks
>>> > R K.
>>> >
>>> >
>>>
>>>
>>>
>>
>
