[nanomsg] Re: ReqRep high performance

  • From: Bent Cardan <bent@xxxxxxxxxxxxxxxxxxxx>
  • To: "nanomsg@xxxxxxxxxxxxx" <nanomsg@xxxxxxxxxxxxx>
  • Date: Wed, 21 Jan 2015 03:10:07 -0500

Yeah, I was just using PUB/SUB.

On Wed, Jan 21, 2015 at 2:33 AM, junyi sun <ccnusjy@xxxxxxxxx> wrote:

> What pattern do you use in your node.js wrapper test?
>
> If you use PUB/SUB or PUSH/PULL, that throughput is expected.
>
> On Wed, Jan 21, 2015 at 3:27 PM, Bent Cardan <bent@xxxxxxxxxxxxxxxxxxxx>
> wrote:
>
>> I'm capping out at around 140,000 msg/s
>>
>> that's with my little javascript wrapper,
>> https://github.com/reqshark/nanomsg.iojs
>>
>> on my laptop; the msg latency below is measured in JavaScript Date.now()
>> milliseconds:
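(Editor's note: the figure reported as "msg latency" below is cumulative wall-clock time since the first message, not per-message latency. A minimal sketch of that bookkeeping, assumed rather than taken from the actual nanomsg.iojs benchmark, with a synchronous loop standing in for message receipt:)

```javascript
// Cumulative counter sketch: log elapsed ms every 10,000 messages.
const start = Date.now();
let count = 0;

function onMessage() {
  count++;
  if (count % 10000 === 0) {
    // "msg latency" is wall-clock ms since the first message.
    console.log(`msg count: ${count}, msg latency: ${Date.now() - start}`);
  }
}

// Drive it synchronously for demonstration.
for (let i = 0; i < 30000; i++) onMessage();
```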
>>
>> ✘-130 bent@quad /Users/bent/nmsg/nanomsg.iojs [master|● 1✚ 4]
>> 02:23 $ node v8
>> msg count: 10000, msg latency: 108
>> msg count: 20000, msg latency: 192
>> msg count: 30000, msg latency: 288
>> msg count: 40000, msg latency: 349
>> msg count: 50000, msg latency: 413
>> msg count: 60000, msg latency: 496
>> msg count: 70000, msg latency: 549
>> msg count: 80000, msg latency: 606
>> msg count: 90000, msg latency: 701
>> msg count: 100000, msg latency: 752
>> msg count: 110000, msg latency: 848
>> msg count: 120000, msg latency: 904
>> msg count: 130000, msg latency: 956
>> msg count: 140000, msg latency: 1056
>> msg count: 150000, msg latency: 1117
>> msg count: 160000, msg latency: 1173
>> msg count: 170000, msg latency: 1254
>> msg count: 180000, msg latency: 1322
>> msg count: 190000, msg latency: 1399
>> msg count: 200000, msg latency: 1451
>> msg count: 210000, msg latency: 1506
>> msg count: 220000, msg latency: 1582
>> msg count: 230000, msg latency: 1661
>> msg count: 240000, msg latency: 1709
>> msg count: 250000, msg latency: 1786
>> msg count: 260000, msg latency: 1834
>> msg count: 270000, msg latency: 1947
>> msg count: 280000, msg latency: 1994
>> msg count: 290000, msg latency: 2043
>> msg count: 300000, msg latency: 2132
>> msg count: 310000, msg latency: 2183
>> msg count: 320000, msg latency: 2232
>> msg count: 330000, msg latency: 2316
>> msg count: 340000, msg latency: 2370
>> msg count: 350000, msg latency: 2449
>> ^C
>> ✘-130 bent@quad /Users/bent/nmsg/nanomsg.iojs [master|● 1✚ 4]
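(Editor's note: the ~140,000 msg/s figure follows directly from the last transcript line, 350,000 messages in a cumulative 2,449 ms:)

```javascript
// Derive throughput from the final transcript line:
// "msg count: 350000, msg latency: 2449" (cumulative ms).
const messages = 350000;
const elapsedMs = 2449;
const msgPerSec = Math.round(messages / (elapsedMs / 1000));
console.log(msgPerSec); // → 142915, i.e. roughly 140,000 msg/s
```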
>>
>> On Wed, Jan 21, 2015 at 1:45 AM, junyi sun <ccnusjy@xxxxxxxxx> wrote:
>>
>>> I think 50,000 msg/s is good enough. I have run performance tests on
>>> Redis and memcached: Redis can reach 72,000 msg/s, while memcached can
>>> reach 25,000 msg/s.
>>>
>>> The speed of the request/reply pattern is limited by the round-trip cost
>>> of TCP. If we want much higher qps, I think we should use an asynchronous
>>> pattern, in which users register a callback function per request and pick
>>> up the corresponding response when it arrives.
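(Editor's note: a toy model of that round-trip bound, with assumed illustrative numbers rather than measurements. A strictly synchronous REQ/REP loop can never exceed 1/RTT requests per second, while an asynchronous pattern with N requests in flight scales that cap by N:)

```javascript
// Toy model of the round-trip bound; rttMs and inFlight are
// illustrative assumptions, not measured values.
const rttMs = 0.5;    // assumed TCP round-trip time in ms
const inFlight = 64;  // outstanding requests under an async pattern

// Synchronous REQ/REP: each request waits out a full round trip.
const syncQps = 1000 / rttMs;
// Asynchronous: up to `inFlight` requests share each round trip.
const asyncQps = inFlight * syncQps;

console.log(syncQps, asyncQps); // → 2000 128000
```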
>>>
>>>
>>>
>>> On Tue, Jan 20, 2015 at 4:03 PM, Pierre Salmon <
>>> pierre.salmon@xxxxxxxxxxxxx> wrote:
>>>
>>>> For information: I already implemented this example and I obtained only
>>>> 50,000 msg/s.
>>>>
>>>> Pierre
>>>>
>>>>
>>>> On 01/20/2015 03:37 AM, Garrett D'Amore wrote:
>>>>
>>>>> socket used by the worker. That means you have to save the header and
>>>>> restore it — the device() routine has this logic, but you need to copy
>>>>> that logic as appropriate, rat
>>>>>
>>>>
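(Editor's note: a plain-JavaScript sketch of the save/restore idea Garrett describes. The field names and the id-keyed map are illustrative assumptions; in nanomsg the real routing header is the raw-socket request backtrace that device() forwards:)

```javascript
// Stash each request's routing header, then reattach it to the reply
// so the reply can route back to the original requester.
const pendingHeaders = new Map();

function onRawRequest(msg, id) {
  // Save the routing header before handing the body to the worker.
  pendingHeaders.set(id, msg.header);
  return msg.body;
}

function buildRawReply(id, replyBody) {
  // Restore the saved header so the reply routes back correctly.
  const header = pendingHeaders.get(id);
  pendingHeaders.delete(id);
  return { header, body: replyBody };
}

const body = onRawRequest({ header: "route-7f", body: "ping" }, 1);
const reply = buildRawReply(1, body + "-pong");
console.log(reply); // → { header: 'route-7f', body: 'ping-pong' }
```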
>>>>
>>>>
>>>
>>
>>
>> --
>>
>> Bent Cardan
>> nothingsatisfies.com | bent@xxxxxxxxxxxxxxxxxxxx
>>
>
>


-- 

Bent Cardan
nothingsatisfies.com | bent@xxxxxxxxxxxxxxxxxxxx
