Martin Sustrik wrote:
> On 04/05/14 07:52, Garrett D'Amore wrote:
>
>> There’s another reason for this that I hadn’t considered, which is
>> “late attach” of a publisher — you have to send all previous
>> subscription requests to any publisher that is newly connected. If
>> there is a lot of sub/unsub activity on the socket, this may become
>> too unwieldy.
>
> Yes. There are also other kinds of optimisation there, like not
> sending duplicate subscriptions etc. The code is pretty messy.
>
>> And yes, then we can use a limit and if more than “n” such requests
>> are made, just abandon publisher side filtering.
>
> The more I think of it, the more this looks like the best solution so
> far. The setting would have to be on both the subscriber side (send at
> most 32 subscriptions upstream; if there are more of them, stop
> caring) and the publisher side (consider a subscriber that issues more
> than 32 subscriptions ill-behaved and disconnect it).
>
> It will also solve the problem of subscription resending: if there are
> at most 32 subscriptions per consumer, it's not likely they will
> overload the network. (Maybe we should also put some limit on the
> subscription length, though.)

I see another way of managing it - negate the semantics of
subscription. Instead of send-nothing-except-whitelist when the
optimization is enabled, make it send-everything-except-blacklist. With
that, subscribe messages _don't_ need to be reliable - you just send one
when you get a message you don't want, since that indicates that the
upstream didn't get it.

With the multiple-types-of-filters support, we could have positive and
negative variants of each - send-only-with-prefix versus
exclude-with-prefix, for example. That would keep the inversion from
causing an explosion of subscriptions.

Since a reliable back-channel is no longer needed, all the blocking-up
problems go away.
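To make the cap concrete, here is a minimal sketch (names and the
limit of 32 are assumptions taken from the discussion, not from any
real nanomsg code) of the publisher-side bookkeeping: track prefix
subscriptions per subscriber, and once a subscriber exceeds the limit,
mark it ill-behaved and abandon publisher-side filtering for it:

```python
MAX_SUBSCRIPTIONS = 32  # hypothetical limit from the discussion

class SubscriberState:
    """Per-subscriber state on the publisher side (illustrative only)."""

    def __init__(self):
        self.prefixes = set()
        self.ill_behaved = False

    def subscribe(self, prefix):
        if self.ill_behaved:
            return  # already gave up filtering for this peer
        self.prefixes.add(prefix)
        if len(self.prefixes) > MAX_SUBSCRIPTIONS:
            # Too many subscriptions: stop tracking and stop filtering.
            # (The thread suggests disconnecting instead; falling back
            # to send-everything is the softer variant mentioned for
            # the subscriber side.)
            self.ill_behaved = True
            self.prefixes.clear()

    def wants(self, message):
        # Ill-behaved peers get everything; they must filter locally.
        if self.ill_behaved:
            return True
        return any(message.startswith(p) for p in self.prefixes)
```

A bounded set like this also keeps resending cheap on reconnect: at
most 32 short prefixes need to be replayed to a late-attaching
publisher.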
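The inverted scheme can be sketched as follows. The key property is
that an "exclude" request is idempotent: if the upstream missed it, the
subscriber finds out by receiving an unwanted message and simply sends
the request again, so no reliable back-channel is needed. All names
here (the `exclude` message, the callback) are hypothetical, not part
of any real API:

```python
class BlacklistSubscriber:
    """Subscriber-side sketch of send-everything-except-blacklist."""

    def __init__(self, send_upstream):
        self.blocked = set()                # local view of the blacklist
        self.send_upstream = send_upstream  # unreliable channel to publisher

    def exclude(self, prefix):
        self.blocked.add(prefix)
        self.send_upstream((b"exclude", prefix))  # best effort

    def on_message(self, msg):
        for prefix in self.blocked:
            if msg.startswith(prefix):
                # Upstream evidently never applied (or lost) our
                # exclude request -- resend it and drop the message
                # locally. Safe because exclusion is idempotent.
                self.send_upstream((b"exclude", prefix))
                return None
        return msg  # deliver to the application
```

Lost exclude requests cost only some wasted traffic until the next
unwanted message triggers a resend, which is exactly why the
blocking-up problems of a reliable subscription channel disappear.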