[nanomsg] interop testing and benchmarks

  • From: Garrett D'Amore <garrett@xxxxxxxxxx>
  • To: nanomsg@xxxxxxxxxxxxx
  • Date: Tue, 25 Mar 2014 12:09:03 -0700

As there is now more than a single implementation of nanomsg (er.. SP), it 
seems that it would be useful to have some basis for comparing implementations. 
 It also seems that it would be useful to have a “conformance” suite of 
tests, so that new implementations can be tested against each other (or at 
least against the “reference”, which in this case I presume to be libnanomsg.)

Has anyone given any thought to this?  Conformance *could* be something as 
simple as a wrapper around nanocat, but I suspect we might want to add a bit 
more to it, to validate things that are easier to check from native C than 
indirectly via nanocat (think especially about edge case validation.) It seems 
that this is more than just a task to write code, but also a documentation 
effort, since one would have to know what the conformance test is going to test 
for.  (For example, if REQ/REP is used, then there has to be agreement about 
what the conformance test involves, including details like port numbers or IPC 
paths, message contents, etc.)
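To make the idea concrete, here is a minimal sketch of the kind of wire-level 
check a conformance test might perform, written against the documented 
SP-over-TCP mapping rather than libnanomsg itself.  The protocol numbers (48 
for REQ, 49 for REP) are my reading of nanomsg's nn.h; a real suite would 
pin these down in the conformance document.

```python
import socket
import struct
import threading

# Assumption: protocol numbers as defined in nanomsg's nn.h
# (NN_PROTO_REQREP = 3, so NN_REQ = 3*16+0 = 48, NN_REP = 3*16+1 = 49).
NN_REQ = 48
NN_REP = 49

def sp_header(proto):
    """Build the 8-byte SP/TCP handshake header: 0x00 'S' 'P' 0x00,
    a 16-bit big-endian protocol number, and 16 reserved zero bits."""
    return b"\x00SP\x00" + struct.pack(">HH", proto, 0)

def recv_exact(sock, n):
    """Read exactly n bytes (TCP may deliver partial reads)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            break
        buf += chunk
    return buf

def check_handshake(received, expected_peer_proto):
    """Validate a peer's handshake header against the expected protocol."""
    if len(received) != 8 or received[:4] != b"\x00SP\x00":
        return False
    proto, reserved = struct.unpack(">HH", received[4:])
    return proto == expected_peer_proto and reserved == 0

# Wire a REQ client to a REP server over a local TCP connection and
# verify that each side sends a well-formed header for its own protocol.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
addr = server.getsockname()

result = {}

def rep_side():
    conn, _ = server.accept()
    conn.sendall(sp_header(NN_REP))
    result["hdr"] = recv_exact(conn, 8)
    conn.close()

t = threading.Thread(target=rep_side)
t.start()

client = socket.create_connection(addr)
client.sendall(sp_header(NN_REQ))
rep_hdr = recv_exact(client, 8)
t.join()
client.close()
server.close()

print(check_handshake(rep_hdr, NN_REP))        # REP peer header valid?
print(check_handshake(result["hdr"], NN_REQ))  # REQ peer header valid?
```

A test like this exercises exactly the kind of edge case that is awkward to 
reach through nanocat: a malformed header, a wrong protocol number, or 
nonzero reserved bits should all be rejected by a conforming peer.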

I’m motivated to make such a test suite and document it, treating libnanomsg as 
the “reference”.  But this might be something someone else wants to contribute 
to?


I’m also thinking that it would be useful to have some kind of benchmark tests 
as well.  How fast can an implementation respond to REQ/REP, bandwidth, etc.  
This would serve less as a marketing tool, and more as a tool to help 
implementors identify bottlenecks and focus effort on fixing them.  This could 
even be handled as part of a “conformance” specification.   Both sides of a 
protocol should be specified, so that it should be possible to mix and match, 
and measure all four possible combinations of client/server resulting from just 
two different implementations.
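As a sketch of the measurement side, here is a hypothetical round-trip 
harness.  It drives a plain TCP echo loop rather than real nanomsg sockets 
(so the numbers mean nothing by themselves); the point is only the 
methodology a benchmark spec might standardize: fixed payload size, a fixed 
request count, and median latency rather than mean.

```python
import socket
import statistics
import threading
import time

def echo_server(server):
    # Trivial REP stand-in: echo each fixed-size request back.
    conn, _ = server.accept()
    while True:
        data = conn.recv(64)
        if not data:
            break
        conn.sendall(data)
    conn.close()

# Assumption: a real harness would open nanomsg REQ/REP sockets here;
# a loopback TCP echo merely demonstrates the timing loop.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(server.getsockname())
payload = b"x" * 64
latencies = []
for _ in range(1000):
    start = time.perf_counter()
    client.sendall(payload)
    reply = client.recv(64)
    latencies.append(time.perf_counter() - start)
client.close()
server.close()

# Median is reported rather than mean: round-trip samples are skewed
# by scheduler and GC outliers.
median_us = statistics.median(latencies) * 1e6
print(f"median round trip: {median_us:.1f} us")
print(f"throughput: {len(latencies) / sum(latencies):.0f} req/s")
```

With the client and server halves specified independently like this, the 
same harness can be pointed at any of the four client/server pairings of 
two implementations.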

Opinions?

        - Garrett

