[nanomsg] Re: help with application level message payload

  • From: Christian Bechette <christian.bechette@xxxxxxxx>
  • To: nanomsg@xxxxxxxxxxxxx
  • Date: Thu, 20 Feb 2014 15:18:09 -0500

Thanks for the feedback. My ultimate goal is to keep buffer ownership at the
application level. I'm a bit puzzled by nn_allocmsg's role: even though it
lets me allocate a buffer, ownership is still transferred to nanomsg as soon
as nn_send is called, which makes it impossible to pool that memory. My main
concern is heap fragmentation in a media streaming application that runs
24/7.
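
To make that concrete, here's roughly the pattern I'm stuck with today (a
minimal sketch, error handling trimmed; assume sock is an already-connected
NN_PAIR socket):

    #include <string.h>
    #include <nanomsg/nn.h>

    void send_frame(int sock, const void *frame, size_t len)
    {
        /* The buffer comes from nanomsg's allocator, not from my own pool. */
        void *msg = nn_allocmsg(len, 0);
        memcpy(msg, frame, len);

        /* Passing NN_MSG transfers ownership: on a successful send nanomsg
         * deallocates msg for me, so I can never return it to a pool. */
        nn_send(sock, &msg, NN_MSG, 0);
    }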

In an ideal world I would also need to pass multiple application-allocated
payloads per message, which nn_sendmsg doesn't allow. It seems to be
supported at the WSASend level but not through the nn API?
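
For reference, my reading of the man page quoted below is that the zero-copy
path through nn_sendmsg looks roughly like this, and that the gather array
may only hold a single NN_MSG element, so there's no way to hand over several
of my own buffers at once:

    #include <string.h>
    #include <nanomsg/nn.h>

    int send_owned(int sock, void *msg)   /* msg comes from nn_allocmsg() */
    {
        struct nn_iovec iov;
        struct nn_msghdr hdr;

        iov.iov_base = &msg;     /* pointer to the pointer, per the docs   */
        iov.iov_len  = NN_MSG;   /* on success nanomsg deallocates msg     */

        memset(&hdr, 0, sizeof (hdr));
        hdr.msg_iov    = &iov;
        hdr.msg_iovlen = 1;      /* a single element; I can't gather
                                    multiple app-owned payloads here       */

        return nn_sendmsg(sock, &hdr, 0);
    }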


On Thu, Feb 20, 2014 at 1:54 PM, zerotacg <zero@xxxxxxxxxxxxxxxx> wrote:

> what you describe seems to me like the default behavior [1] if you allocate
> the message yourself
>
> [quote]
> Alternatively, to send a buffer allocated by nn_allocmsg(3) function set
> iov_base to point to the pointer to the buffer and iov_len to NN_MSG
> constant. In this case a successful call to nn_send will deallocate the
> buffer.
> [/quote]
>
>
> [1] http://nanomsg.org/v0.2/nn_sendmsg.3.html
>
> On 20.02.2014 17:55, Christian Bechette wrote:
> > Hello all, I'm currently using nanomsg for a media streaming project. If
> > possible I'd appreciate your input on my current situation:
> >
> > For example, in a TCP PAIR scenario, is it possible for nanomsg to use my
> > own application-level message payload and not allocate a new one every
> > time I do nn_sendmsg? In the case of a blocking send I imagine the I/O
> > completion port will guarantee that the payload can be discarded after
> > leaving nn_sendmsg.
> >
> > The same applies to the receiving end: is there a way it could write
> > into my own buffers?
> >
> > Thanks for reading :)
> >
> > Christian Bechette
>
>
>


-- 
Christian.Bechette@xxxxxxxx
Tightrope Media Systems
(866)-866-4118 x229
