[nanomsg] Re: benchmarks

  • From: Garrett D'Amore <garrett@xxxxxxxxxx>
  • To: nanomsg@xxxxxxxxxxxxx
  • Date: Fri, 6 Mar 2015 14:26:46 -0800

I’m quite supportive of this.  Think of it sort of like SPECnano.  OK, I’m not 
really suggesting a SPEC benchmark, but if we had a standardized test suite 
that determined the protocols/payloads used and the points of measurement, 
that would be awesome.  Ideally, this could be built in such a way that 
alternate implementations could be plugged in to generate standardized results.

(The results would necessarily be specific to the host platform(s), etc., but 
if the test suite included its own “baseline” run, then other results could be 
compared against it.)
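
As a very rough sketch of what such a standardized test definition and
baseline-normalized reporting could look like in Go (every name and field
here is hypothetical; nothing like this exists yet):

// Hypothetical sketch only: a standardized test case pins down the
// protocol, transport, payload size, and measurement, and results are
// reported as ratios against the suite's own baseline run so that numbers
// from different hosts remain comparable.
package main

import "fmt"

type TestCase struct {
    Protocol    string // e.g. "reqrep", "pubsub"
    Transport   string // e.g. "tcp", "ipc", "inproc"
    PayloadSize int    // bytes per message
    Measure     string // "roundtrip-latency" or "throughput"
}

type Result struct {
    Implementation string
    Case           TestCase
    Value          float64 // usec/op or msgs/sec, depending on Measure
}

// Normalize divides each result by the baseline value for the same test
// case, yielding host-independent ratios instead of absolute numbers.
func Normalize(results []Result, baseline map[TestCase]float64) map[string]float64 {
    out := make(map[string]float64)
    for _, r := range results {
        if base, ok := baseline[r.Case]; ok && base != 0 {
            key := fmt.Sprintf("%s %s/%s/%dB", r.Implementation,
                r.Case.Protocol, r.Case.Transport, r.Case.PayloadSize)
            out[key] = r.Value / base
        }
    }
    return out
}

func main() {
    tc := TestCase{"reqrep", "tcp", 64, "roundtrip-latency"}
    baseline := map[TestCase]float64{tc: 120.0} // suite's own baseline, usec/op
    results := []Result{{"mangos", tc, 90.0}}
    fmt.Println(Normalize(results, baseline)) // map[mangos reqrep/tcp/64B:0.75]
}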

         - Garrett

> On Mar 6, 2015, at 12:54 PM, Drew Crawford <drew@xxxxxxxxxxxxxxxxxx> wrote:
> 
> I’ve noticed the chart doesn’t document message size for the latency numbers.
> 
> In support of profile-driven development, I’d be interested in getting some 
> kind of basic benchmarking suite that could be used to compare 
> implementations.  Just “basic” stuff like latency, throughput, etc., with a 
> common definition of what the test is (roundtrip or not, message size, etc.).  
> Ideally, the suite would be included in the distributions themselves so people 
> could play along at home in their own environments.
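
As a rough illustration of that kind of basic test, here is a throughput
sketch against the current mangos v3 push/pull API (the import paths
postdate this thread, and the inproc address, message count, and payload
size are arbitrary illustration values):

// Throughput sketch: send `count` messages of `msgSize` bytes from a push
// socket to a pull socket over inproc and report msgs/sec and MB/sec.
package main

import (
    "fmt"
    "time"

    "go.nanomsg.org/mangos/v3/protocol/pull"
    "go.nanomsg.org/mangos/v3/protocol/push"
    _ "go.nanomsg.org/mangos/v3/transport/all"
)

const (
    addr    = "inproc://bench"
    count   = 100000
    msgSize = 1024 // bytes per message
)

func main() {
    rx, err := pull.NewSocket()
    if err != nil {
        panic(err)
    }
    defer rx.Close()
    if err := rx.Listen(addr); err != nil {
        panic(err)
    }

    tx, err := push.NewSocket()
    if err != nil {
        panic(err)
    }
    defer tx.Close()
    if err := tx.Dial(addr); err != nil {
        panic(err)
    }

    payload := make([]byte, msgSize)
    done := make(chan struct{})

    // Receiver: drain `count` messages, then signal completion.
    go func() {
        for i := 0; i < count; i++ {
            if _, err := rx.Recv(); err != nil {
                panic(err)
            }
        }
        close(done)
    }()

    start := time.Now()
    for i := 0; i < count; i++ {
        if err := tx.Send(payload); err != nil {
            panic(err)
        }
    }
    <-done
    elapsed := time.Since(start)

    msgsPerSec := float64(count) / elapsed.Seconds()
    fmt.Printf("%.0f msgs/sec, %.1f MB/sec\n",
        msgsPerSec, msgsPerSec*msgSize/1e6)
}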
> 
> Such a move would also bring some empiricism to our arguments.  I have at 
> times objected to loop detection, etc. “on performance grounds” but I don’t 
> actually have a good sense of the magnitude of the impact because there is no 
> straightforward way to measure it in the existing implementations.
> 
> The subtext here is that I am shipping a benchmarking suite for my 
> implementation, and I see more value in doing something standardized here, 
> covering a broader class of performance goals, than in pretending that what I 
> come up with by myself is representative.
> 
> Drew
> 
>> On Feb 14, 2015, at 9:56 AM, Garrett D'Amore <garrett@xxxxxxxxxx> wrote:
>> 
>> BTW, the hardware in that case was a 2014 iMac, and an “op” is a full 
>> req/rep exchange; i.e., I start the clock on transmit of the request and 
>> stop it on receipt of the reply.
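
For concreteness, a minimal sketch of that round-trip measurement, written
against the current mangos v3 req/rep API (the import paths postdate this
thread, and the TCP address, iteration count, and payload size are arbitrary
illustration values, not the ones from the benchmark above):

// Latency sketch: one "op" is a full req/rep exchange; the clock starts
// before the request is sent and stops when the reply is received.
package main

import (
    "fmt"
    "time"

    "go.nanomsg.org/mangos/v3/protocol/rep"
    "go.nanomsg.org/mangos/v3/protocol/req"
    _ "go.nanomsg.org/mangos/v3/transport/all"
)

const (
    addr  = "tcp://127.0.0.1:40899"
    iters = 10000
)

func main() {
    server, err := rep.NewSocket()
    if err != nil {
        panic(err)
    }
    defer server.Close()
    if err := server.Listen(addr); err != nil {
        panic(err)
    }

    // Echo server: bounce every request straight back as the reply.
    go func() {
        for {
            msg, err := server.Recv()
            if err != nil {
                return
            }
            if err := server.Send(msg); err != nil {
                return
            }
        }
    }()

    client, err := req.NewSocket()
    if err != nil {
        panic(err)
    }
    defer client.Close()
    if err := client.Dial(addr); err != nil {
        panic(err)
    }

    payload := make([]byte, 64)
    start := time.Now()
    for i := 0; i < iters; i++ {
        if err := client.Send(payload); err != nil {
            panic(err)
        }
        if _, err := client.Recv(); err != nil {
            panic(err)
        }
    }
    elapsed := time.Since(start)
    fmt.Printf("%.1f usec per req/rep round trip\n",
        float64(elapsed.Microseconds())/float64(iters))
}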
>> 
>> And even my latest post back in April is old.  We get better numbers now and 
>> mangos has new options to help tune the dial between throughput and latency. 
>> And newer Go versions have helped as well. 
>> 
>> I also now understand the dips around IPC.  Basically, perf sucks there 
>> because the MTU is usually small, 576 bytes or somesuch, which leads to 
>> multiple system calls.  Using it is therefore suboptimal for cases where 
>> you have larger payloads.
>> 
>> I believe Tyler Treat has done some benchmarking here as well.  You can see 
>> more about that here: http://www.bravenewgeek.com/benchmark-responsibly/
>> 
>> Sent from my iPhone
>> 
>> On Feb 14, 2015, at 2:33 AM, Drew Crawford <drew@xxxxxxxxxxxxxxxxxx> wrote:
>> 
>>> Are there any benchmarks published for nanomsg / mangos?
>>> 
>>> As is sort of an open secret, I’m working on a new speed-focused 
>>> implementation of some of the SP protocols.  I got the tests passing on the 
>>> first few pieces of my architecture today and measured some initial 
>>> timings.  However, I don’t really have a standard of comparison to know “am 
>>> I fast yet” and by how much.
>>> 
>>> I did dig up 
>>> http://itnewscast.com/servers-storage/early-performance-numbers but that is 
>>> old and leaves out some key details (like what an “op” is and what kind of 
>>> hardware was used).
>>> 
>>> However, if those are the current order-of-magnitude timings for req/rep on 
>>> modern Intel hardware, then let’s just say I’m a happy camper...
> 
