[nanomsg] Re: RFC links

  • From: Martin Sustrik <sustrik@xxxxxxxxxx>
  • To: nanomsg@xxxxxxxxxxxxx
  • Date: Wed, 21 Aug 2013 21:23:27 +0200

Alex, Paul,

I just wanted to say that I am not an expert in the naming/discovery area, so I can't really comment on the specifics.

I am only able to give some very high level requirements.

Basically, we need some kind of database to store the topology design. The database is updated by administrators and used by individual applications to find out what their place in the topology is, i.e. who they should connect to and the like.

For example:

Application: "I am process X on box Y and I want to join topology XYZ."
Database: "Please connect to tcp://server01:5555 with priority 5, connect to ipc://xyz with priority 3, and bind to tcp://*:5556 with priority 2."
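As a minimal sketch, the database reply above could be parsed into actionable directives. The textual "connect/bind ... with priority N" format is taken from the example and is purely hypothetical; no such protocol exists in nanomsg itself.

```python
# Hypothetical sketch: turn a topology-database reply into
# (action, address, priority) directives. The message format is
# invented for illustration and is not part of nanomsg.

def parse_directives(response):
    """Split the reply on commas and pick out action, address, priority."""
    directives = []
    for part in response.split(","):
        words = part.split()
        # e.g. ["connect", "to", "tcp://server01:5555", "with", "priority", "5"]
        action = words[0]
        address = words[2]
        priority = int(words[-1])
        directives.append((action, address, priority))
    return directives

reply = ("connect to tcp://server01:5555 with priority 5, "
         "connect to ipc://xyz with priority 3, "
         "bind to tcp://*:5556 with priority 2")

for action, address, priority in parse_directives(reply):
    print(action, address, priority)
```

The application would then call its transport's connect or bind routine for each directive, ordered by priority.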

The hard part, IMO, is that there are transient nodes in topologies which admins do not know about in advance. For example, client applications or service instances that you start on the fly to handle the load.

These cannot be handled statically, such as by fixed DNS records. Rather, there's a need for some kind of dynamic (rule-based? discovery?) system.
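One common shape for such a dynamic system is a heartbeat-based registry: transient nodes re-register periodically and entries whose TTL lapses are dropped. The sketch below is a toy illustration under that assumption; all names in it are invented.

```python
# Minimal sketch of TTL-based dynamic discovery: transient nodes send
# heartbeats, and nodes that stop heart-beating are forgotten.
import time


class Registry:
    def __init__(self, ttl=10.0):
        self.ttl = ttl
        self.nodes = {}  # address -> timestamp of last heartbeat

    def heartbeat(self, address, now=None):
        """Record that a node is alive (now= is for deterministic testing)."""
        self.nodes[address] = time.time() if now is None else now

    def live_nodes(self, now=None):
        """Drop entries older than the TTL and return the survivors."""
        now = time.time() if now is None else now
        self.nodes = {a: t for a, t in self.nodes.items()
                      if now - t <= self.ttl}
        return sorted(self.nodes)


reg = Registry(ttl=10.0)
reg.heartbeat("tcp://worker01:5555", now=0.0)
reg.heartbeat("tcp://worker02:5555", now=8.0)
print(reg.live_nodes(now=12.0))  # worker01 expired; only worker02 remains
```

The design choice here is that liveness is inferred from the node's own activity rather than from an admin-maintained record, which is exactly what fixed DNS entries cannot provide.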

Martin

On 21/08/13 20:49, Paul Colomiets wrote:
Hi Alex,

On Wed, Aug 21, 2013 at 8:53 PM, Alex Elsayed <eternaleye@xxxxxxxxx> wrote:
RPCBIND is really quite horrid

Does this impression come from real usage or from theory? I only know
it a little, and only in theory, so practical experience would be
much appreciated.

It allocates the service ports dynamically (so
you have to use portreserve or the like to prevent it from taking something
important which is needed elsewhere)

That's not a big problem IMO. I can easily find an unused port range.

AND adds a roundtrip on connect because
it needs to ask the portmapper service what port the real service is running
on.


This is not a problem either. My use cases for zeromq/nanomsg keep
connections open for ages, so a sub-millisecond roundtrip at
connection initiation is not an issue. Sure, if you allocate lots of
short-lived connections you should have an option to skip rpcbind.

What I'd see for *local/internal* service listing is using
Avahi/Zeroconf/MDNS. It's based on DNS, but it's sent out by every
participating host. DNS for WAN, Zeroconf for LAN.


Maybe I'm paranoid, but Zeroconf looks too dangerous for production
use. It's too easy to misuse (e.g. to propagate a development node
under a production name, or to forget to disable mDNS on a failing node).

For a public service, you'd really want a real DNS name anyway, just like
(say) an MX record.


For a public service, yes, DNS is the best choice. But it's a rather
different problem space. A public service usually has relatively few
gateway servers allocated, which:

1. Change very rarely, say once a day (in practice once a
year or so), compared to hourly changes in the number of computing
nodes (and sub-second changes in case of failover)

2. Implement failover by making the client switch to lower-priority
addresses, compared to failover by removing and adding nodes for
in-data-center problems

So DNS matches this case well.
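The failover model described in point 2 can be sketched as priority-ordered selection, the way MX/SRV records work: clients try the record with the lowest priority value first and fall back to the next one. The record data and the `reachable` check below are invented for illustration.

```python
# Sketch of DNS-style priority failover (MX/SRV semantics: lower value
# means more preferred). Hostnames are hypothetical.

records = [
    (10, "gw1.example.com"),  # primary gateway
    (20, "gw2.example.com"),  # backup, used only if the primary fails
]


def pick_gateway(records, reachable):
    """Return the reachable host with the lowest priority value, or None."""
    for _priority, host in sorted(records):
        if reachable(host):
            return host
    return None


# Pretend the primary is unreachable:
print(pick_gateway(records, lambda h: h != "gw1.example.com"))
```

Because the record set itself changes rarely, this scheme fits the public-gateway case without needing the rapid add/remove churn of in-data-center discovery.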

If this is a local network, then each host is announcing its own services over
Zeroconf, so just reconnecting to that host would fetch the updated record.


If so, how would a node know when to reconnect?


