Thanks Achille. Just curious why we don't use shared memory for IPC so we
get the fastest implementation. I guess domain sockets are a lot easier to
deal with and don't require locks?

Thanks,
Ron

Sent from my iPhone

> On Sep 4, 2014, at 7:39 PM, Achille Roussel <achille.roussel@xxxxxxxxx> wrote:
>
> You have 3 transport protocols in nanomsg: inproc, ipc and tcp.
>
> - inproc: communication within a process
> - ipc: communication between processes on the same host
> - tcp: communication between processes on different hosts
>
> tcp is going to be a bit slower than ipc: it goes through the network
> stack, so latency will be higher than if the messages stay on the same
> host. I don't see how it would be useful to benchmark these two
> transports against each other, unless you plan on using tcp for
> host-local communication; in that case it depends on how efficient the
> named pipe and tcp stack are on the OS. But really, with how simple
> nanomsg makes it to use one transport or the other, you should just use
> the right tool for your use-case.
>
> Or maybe I'm not understanding your question very well.
>
>> On Sep 4, 2014, at 7:13 PM, Ron's Yahoo! (Redacted sender
>> "zlgonzalez@xxxxxxxxx" for DMARC) <dmarc-noreply@xxxxxxxxxxxxx> wrote:
>>
>> Hi,
>> If I were to use nanomsg to do interprocess communication on the same
>> host, how would I do that in the most efficient manner?
>> And have we done any benchmarks on the performance difference between
>> interprocess communication on the same host vs. on different hosts? If
>> it's a different host, what is the fastest reliable protocol that can
>> be used?
>>
>> Thanks,
>> Ron
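The ipc-vs-tcp distinction above can be tried out directly. The sketch below is not nanomsg code; it uses Python's standard socket module to compare the two kinds of host-local transport the thread discusses: a Unix domain socket (what nanomsg's "ipc" transport uses on POSIX systems) against TCP over the loopback interface (the "tcp" transport pointed at 127.0.0.1). The payload size and round-trip count are arbitrary choices for a rough timing, not anything from the thread.

```python
# Rough comparison of host-local transports: Unix domain socket vs. TCP
# loopback. A sketch with Python's stdlib, NOT nanomsg itself.
import os
import socket
import tempfile
import threading
import time

MSG = b"x" * 64          # small payload, like a typical control message
ROUND_TRIPS = 2000       # arbitrary count, just enough for a rough timing

def echo_server(listener):
    """Accept one connection and echo every received chunk back."""
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(len(MSG))
            if not data:
                break
            conn.sendall(data)

def time_round_trips(listener, connect):
    """Return seconds spent on ROUND_TRIPS request/reply exchanges."""
    t = threading.Thread(target=echo_server, args=(listener,), daemon=True)
    t.start()
    client = connect()
    with client:
        start = time.perf_counter()
        for _ in range(ROUND_TRIPS):
            client.sendall(MSG)
            buf = b""
            while len(buf) < len(MSG):   # reassemble the echoed message
                buf += client.recv(len(MSG) - len(buf))
        elapsed = time.perf_counter() - start
    t.join()
    listener.close()
    return elapsed

# ipc-style: Unix domain socket bound to a filesystem path
path = os.path.join(tempfile.mkdtemp(), "bench.sock")
uds = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
uds.bind(path)
uds.listen(1)
def uds_connect():
    c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    c.connect(path)
    return c
ipc_time = time_round_trips(uds, uds_connect)

# tcp-style: TCP socket on the loopback interface
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.bind(("127.0.0.1", 0))           # let the OS pick a free port
tcp.listen(1)
port = tcp.getsockname()[1]
def tcp_connect():
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(("127.0.0.1", port))
    c.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no batching
    return c
tcp_time = time_round_trips(tcp, tcp_connect)

print(f"ipc (unix socket): {ipc_time:.3f}s  tcp (loopback): {tcp_time:.3f}s")
```

On most systems the Unix domain socket wins because loopback TCP still pays for the TCP/IP stack, which matches Achille's point: the gap is real but OS-dependent, and with nanomsg the switch between the two is just a change of address string, so picking per use-case costs nothing.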