[nanomsg] Re: RFC links

  • From: Martin Sustrik <sustrik@xxxxxxxxxx>
  • To: <nanomsg@xxxxxxxxxxxxx>
  • Date: Tue, 13 Aug 2013 17:06:02 +0200

Ok, AFAIU, what you want is a two-level service/component design.

The service would be bound to a fixed port number and forward messages to components based on name.
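
Just to make that concrete, here is a rough sketch of such a front end, assuming the components sit behind ipc:// endpoints named after them and that messages carry a "<component>|<payload>" prefix. The port, the framing and the ipc:// paths are invented for illustration; none of this is an existing nanomsg feature:

    /* Rough sketch only: a "stocks" service owning one TCP port and
       forwarding each message to a component picked by a name prefix.
       The port, the "name|payload" framing and the ipc:// paths are
       assumptions for illustration, not existing nanomsg features. */

    #include <stdio.h>
    #include <string.h>
    #include <nanomsg/nn.h>
    #include <nanomsg/pipeline.h>

    int main (void)
    {
        int front = nn_socket (AF_SP, NN_PULL);
        nn_bind (front, "tcp://*:5555");             /* the one fixed port */

        for (;;) {
            char *buf = NULL;
            int sz = nn_recv (front, &buf, NN_MSG, 0);
            if (sz < 0)
                break;

            char *sep = memchr (buf, '|', sz);       /* "<component>|<payload>" */
            if (sep) {
                char addr [128];
                snprintf (addr, sizeof (addr), "ipc:///tmp/stocks-%.*s.ipc",
                    (int) (sep - buf), buf);

                /* One connection per message keeps the sketch short; a real
                   multiplexer would cache these sockets. */
                int back = nn_socket (AF_SP, NN_PUSH);
                nn_connect (back, addr);
                nn_send (back, sep + 1, sz - (int) (sep - buf) - 1, 0);
                nn_close (back);
            }
            nn_freemsg (buf);
        }
        nn_close (front);
        return 0;
    }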

Technically, it could be done with classic brokers (there should be an option to run RabbitMQ on a user-specified port and have several instances running in parallel), with TCPMUX (it can be implemented so that the port to be used is specified by the user), or whatever.
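
For reference, classic TCPMUX (RFC 1078) is just a one-line handshake before the connection is handed over to the service. A rough client-side sketch, with host, port and service name as placeholders:

    /* Rough client-side sketch of the classic TCPMUX (RFC 1078) handshake:
       connect, send the service name plus CRLF, expect a reply starting
       with '+' (accepted) or '-' (refused).  Host, port and service name
       are placeholders; a real client would read the reply up to CRLF. */

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int tcpmux_connect (const char *host, int port, const char *service)
    {
        int fd = socket (AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sin = {0};
        sin.sin_family = AF_INET;
        sin.sin_port = htons (port);             /* 1 in classic TCPMUX */
        inet_pton (AF_INET, host, &sin.sin_addr);
        if (connect (fd, (struct sockaddr*) &sin, sizeof (sin)) < 0) {
            close (fd);
            return -1;
        }

        char line [256];
        snprintf (line, sizeof (line), "%s\r\n", service);
        write (fd, line, strlen (line));

        char reply [256];
        ssize_t n = read (fd, reply, sizeof (reply));
        if (n < 1 || reply [0] != '+') {
            close (fd);
            return -1;
        }
        return fd;   /* from here on the connection belongs to the service */
    }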

Anyway, the security vulnerability with classic TCPMUX was that someone could expose a new service without having to ask the admin first, right?

With the two-level service/component design you can expose a new component within an existing service without asking the admin first. Yet, it may be the same evil component as in the previous case. How is this more secure?

Martin

On 2013-08-13 16:43, Alex Elsayed wrote:
On Tuesday, August 13, 2013 09:31:59 AM Martin Sustrik wrote:
<snip>
You are basically advocating the classic message broker design (as in RabbitMQ, MQSeries, etc.). A message broker is a component that serves as a hub for all the communication, forwarding the messages to individual services as required.

Mm, not quite. I see this as being per-service (only for the 'stocks' service) but for all of the components of that service. Also, I see this as something that would be used as a gateway to the wider internet - on the intranet, you don't need to worry so much about ports-through-the-firewall, and this isn't a broker in the 'one per network' sense - if you used this across everything,
it'd be 'one per host'.

There's nothing bad about that per se, but ZeroMQ's and now nanomsg's
mission is to get rid of this kind of centralised design.

That being said, a message broker has the same problems as a TCPMUX-style multiplexer:

1. It uses a global namespace (port 5672 for AMQP, port 1 for TCPMUX)

No, my intent was that a port would be allocated for the service, and the components would then share it. You'd pick a port for 'stocks' and then this would mux heartbeat/updates/commands.

2. Namespace collisions (exchange names in AMQP, service names in
TCPMUX)

See above

3. Arbitrary TCP traffic passes through anyway; the only difference is whether it goes to the AMQP broker first, just to get forwarded to the application, or directly to the application.

No, TCPMUX forwards TCP, and may carry any TCP protocol. This at least is SP and only SP. A number of firewalls do try to do deeper inspection - there's an
out-of-tree 'l7filter' patch for iptables, for instance.

The concern about portability of SCM_RIGHTS is a valid one though. However, AFAICS every system (even Windows!) has some way to do that, so we should be OK.

Okay, that's good to know.

In short, broker and multiplexer are functionally equivalent. The only difference is that the broker is more complex, thus more brittle and a bigger maintenance liability, a serious performance bottleneck, etc.

This all assumes a central broker, rather than one that's more like a border checkpoint. Passing through the firewall kinda implies a SPOF anyway, and any methods of avoiding that apply here as well - anycast, round-robin, yadda yadda. Since the multiplexer is edge and not center, failover and such work just as well as in any case where you have multiple XFOO/XBAR connected to the same topology in the same place.

Take another look at what I suggested as an API - that would give every host that called nn_bind(..., NN_MULTI) an instance of the broker. My thinking was along the lines of this internally spawning the device and doing ipc:// between that and the caller. This isn't intended to be some enterprise broker, it's intended to be at the host level so it has a much smaller portion of
traffic and far less temptation to add silly buffering junk.
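
Roughly, such a bind could boil down to something like this internally - a sketch under my assumptions (the port, the ipc:// path and the raw REQ/REP pairing are illustrative; NN_MULTI itself obviously doesn't exist yet):

    /* A minimal sketch of what nn_bind(..., NN_MULTI) could do internally:
       spawn a forwarding device that owns the shared TCP port and relays
       to local callers over ipc://.  NN_MULTI is the proposed flag, not an
       existing one; port and ipc:// path are made up. */

    #include <nanomsg/nn.h>
    #include <nanomsg/reqrep.h>

    static void run_multiplexer (void)
    {
        /* Raw (XREQ/XREP-style) sockets, as nn_device() requires. */
        int front = nn_socket (AF_SP_RAW, NN_REP);    /* faces the network      */
        int back  = nn_socket (AF_SP_RAW, NN_REQ);    /* faces local components */

        nn_bind (front, "tcp://*:7000");              /* one user-chosen port   */
        nn_bind (back,  "ipc:///tmp/stocks-mux.ipc"); /* callers connect here   */

        nn_device (front, back);                      /* blocks; run in its own
                                                         thread or process      */
    }

A component then just nn_connect()s an ordinary NN_REP socket to ipc:///tmp/stocks-mux.ipc and never touches the TCP port itself.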

Honestly, it could probably *also* be implemented just fine with socket-passing instead of XREQ/XREP - I wrote my message not knowing if SCM_RIGHTS
had equivalents on other platforms, but the reason I put the
component/topology ID in the protocol header instead of the message header is
for just that kind of possibility.
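
For completeness, the SCM_RIGHTS mechanism boils down to a single sendmsg() with a control message. A minimal Unix-only sketch (on Windows one would duplicate the handle with WSADuplicateSocket instead):

    /* Minimal Unix-only sketch of descriptor passing with SCM_RIGHTS:
       send file descriptor `fd` across the connected AF_UNIX socket `chan`.
       Error handling trimmed for brevity. */

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    static int send_fd (int chan, int fd)
    {
        char dummy = '*';                       /* must carry at least one byte */
        struct iovec iov = { &dummy, 1 };

        union {
            char buf [CMSG_SPACE (sizeof (int))];
            struct cmsghdr align;               /* ensures proper alignment */
        } ctrl;
        memset (&ctrl, 0, sizeof (ctrl));

        struct msghdr msg = {0};
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl.buf;
        msg.msg_controllen = sizeof (ctrl.buf);

        struct cmsghdr *cmsg = CMSG_FIRSTHDR (&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN (sizeof (int));
        memcpy (CMSG_DATA (cmsg), &fd, sizeof (int));

        return sendmsg (chan, &msg, 0) < 0 ? -1 : 0;
    }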

My feeling is that using a message broker would be more security theater. In reality it allows any service inside the network to be accessed from outside without appropriate approval. However, given that devs are going to present it to the admin as "we only want you to open one TCP port to let us use ActiveMQ", the admin would get the impression that only *one* service is open to the outer world, would sleep better at night and wouldn't bother the devs too much about it.

That kind of thing is *why* my intent was for this to work like SP itself - user-assigned ports, and multiple instances of the multiplexer. That lets ports have a granularity closer to traditional TCP services, instead of a far coarser granularity (AMQP & TCPMUX having a single port) or a finer and thus inconvenient granularity (one port per SP instance), which causes the problems you are trying to resolve.

By treating the multiplexer as an optional component at the edge, used to reduce port allocations, rather than as a critical one at the center, it really does act differently from a central broker. If every participant uses it (which may be done to cut down on the mental port-allocation overhead), it's one multiplexer per host, not some central failure point of the *network*.

The multiplexer is unnecessary if port allocation is not a problem, because multiplexing doesn't occur and string naming devolves to case three (which is
why I brought it up earlier).
