[nanomsg] Re: Trying to implement "directory" pattern

  • From: Martin Sustrik <sustrik@xxxxxxxxxx>
  • To: <nanomsg@xxxxxxxxxxxxx>
  • Date: Mon, 04 Mar 2013 13:29:55 +0100

Hi Paul,

3. It is an optimization that can be done later without affecting users.
It can be done when its need is demonstrated, and when there is at
least one big customer that will use it at real scale.


I am not sure about this one. If it turns out that preventing "sideways" failure propagation cannot be realistically done, we'll have to think out of the box and possibly adjust the affected patterns in such a way as to cope with this scenario. If we do so, it'll affect the users. Let's rather
not ignore the problem.


In your statement it's true :) I was talking about the "full buffering" vs
"smart iteration" approaches to subscriptions. In my case it's clearly an
"invisible" optimization.

Ah. Got you.

In general, I would say that we should expect some users to use large
subscription sets. The question, of course, is whether algorithms for such monster subscription sets should not be built on top of nanomsg using raw
PUB/SUB sockets.
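
For concreteness, a minimal sketch of what "on top of nanomsg" could look like: a forwarder that keeps its own (possibly huge) subscription store in application code and filters in user space. The addresses and the match() helper below are placeholders, not anything that exists today, and note that this variant simply receives everything from upstream instead of forwarding subscriptions up, which is exactly the limitation raised below.

    #include <stddef.h>
    #include <nanomsg/nn.h>
    #include <nanomsg/pubsub.h>

    /* Hypothetical application-level subscription store (trie, hash
       table, whatever scales to monster subscription sets). */
    extern int match (const void *msg, size_t len);

    int main (void)
    {
        int upstream = nn_socket (AF_SP_RAW, NN_SUB);   /* faces publishers */
        int downstream = nn_socket (AF_SP_RAW, NN_PUB); /* faces subscribers */
        nn_connect (upstream, "tcp://pub.example.org:5555"); /* placeholder */
        nn_bind (downstream, "tcp://*:5556");                 /* placeholder */

        while (1) {
            void *buf = NULL;
            int sz = nn_recv (upstream, &buf, NN_MSG, 0);
            if (sz < 0)
                break;
            if (match (buf, (size_t) sz))          /* filter in user space */
                nn_send (downstream, buf, sz, 0);  /* nn_send copies the data */
            nn_freemsg (buf);
        }
        return 0;
    }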


At first sight it's a nice idea. However, a raw SUB socket, when forwarding a subscription, has no way to know which pipes are in "pushback" state, and can't react based on that. So it can't reliably deliver subscriptions upstream.

Ugh. Right. :(

We can also allocate only a fixed amount of memory for holding subscriptions (the limit may be set by the user). If the limit is exceeded, we can either
report an error or switch the filtering off.
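
As an illustration only, the user-facing side might look something like the sketch below. NN_SUB_MAX_SUBSCRIPTIONS is a made-up option name and ENOMEM is just one plausible error code; neither is part of nanomsg today.

    #include <errno.h>
    #include <nanomsg/nn.h>
    #include <nanomsg/pubsub.h>

    /* 's' is an NN_SUB socket created elsewhere. */
    void set_limit_and_subscribe (int s)
    {
        /* Hypothetical option: maximum number of subscriptions the
           socket may hold before refusing new ones. */
        int limit = 100000;
        nn_setsockopt (s, NN_SUB, NN_SUB_MAX_SUBSCRIPTIONS,
            &limit, sizeof (limit));

        if (nn_setsockopt (s, NN_SUB, NN_SUB_SUBSCRIBE, "foo", 3) < 0 &&
              nn_errno () == ENOMEM) {
            /* Limit exceeded: either treat it as a hard error or fall
               back to unfiltered delivery, as discussed above. */
        }
    }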

It's not clear whether the limit is the size of the tree or the buffer size. If it's
the size of the tree, then how large can the buffer grow (if it's filled by
unsubscriptions)? If the limit is the buffer size, then (apart from being what I've proposed in the first place) it may be lower than the size of the tree. But never mind, we can have two different limits to cover all cases.

The other thing is not clear either. Reporting an error when the size of the tree overflows is OK. However, as indicated above, it's not enough. If we report an
error on buffer overflow, then what can the user do about it? Never do any
subscription from that point on? Close and reopen the whole socket? Ah, OK,
in the case of a single publisher it would work (the user knows which publisher is
failing, and reopening the connection to the failing publisher is not a problem either). Another option: switch filtering off. I don't even understand how that can work. The publisher has already received some subscriptions. For it to know that filtering is off, it would have to receive a message (which is stalled in pushback). We could close the overloaded connection and start a fresh new
one with filtering disabled, but I don't like this idea.

All in all, the limit of the number of subscriptions seems to be a nice
idea, but doesn't solve any problem by itself. Combining it with some
heuristics for buffer size and reconnect mechanics along the lines of what
I've described in the previous mail would work for me.

Yes. It should be combined with the idea you proposed in your previous email: set the buffer size to double the maximum subscription trie size. To do that you need to be able to set a "max no. of subscriptions" option. If pushback happens, reconnect.

It sounds like it could possibly work (although I would rather go for something like 1.2x rather than 2x).
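
Roughly, the sizing heuristic under discussion would come down to something like the sketch below. The "max subscriptions" figure is the hypothetical limit from above, the average subscription length and the 1.2x factor are assumptions for illustration; NN_SNDBUF is the existing send-buffer option.

    #include <nanomsg/nn.h>

    /* Back-of-the-envelope sizing for the scheme discussed above. */
    void size_subscription_buffer (int s)
    {
        int max_subs = 10000;   /* hypothetical "max no. of subscriptions" */
        int avg_sub_len = 32;   /* assumed average subscription, in bytes */
        int sndbuf = (int) (1.2 * max_subs * avg_sub_len);  /* 1.2x headroom */

        nn_setsockopt (s, NN_SOL_SOCKET, NN_SNDBUF, &sndbuf, sizeof (sndbuf));

        /* If pushback happens anyway (the buffer fills up), drop the
           connection, reconnect and re-send the whole subscription set
           on the fresh pipe. */
    }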

Martin
