On 2014-11-20 17:20, Paul Colomiets wrote:
Well, what I really mean is: can I specify an interface/IP address? I want that at least for security purposes. (I know I can use a firewall, but I'm used to limiting listening addresses.)
Not yet, but can be added.
Same here: how would I use a different tcpmuxd instance?
Individual tcpmuxd instances are bound to different TCP ports. To use the instance bound to port 5555, connect to port 5555. To use the instance bound to port 5556, connect to port 5556. Etc.
2. How does it register with tcpmuxd? What if there are multiple tcpmuxd instances running?

An attempt to start a second tcpmuxd fails. Registration happens over AF_UNIX sockets; the underlying file is /tmp/tcpmux-5555.ipc. If multiplexers for multiple ports are running, they use different AF_UNIX addresses to communicate (replace 5555 with whatever port you are using).

Okay. But I believe a traditional host-port address should be used, not just a port number.
That's the case now: nn_bind (s, "tcpmux://*:5555/foo"); Although interface names other than '*' are not yet supported.
Okay. So the service unregisters when the Unix socket is closed, right?
Yes.
You mean expensive in terms of bytes transferred? If so, fair enough. However, most arguments against multiple connections don't really stand. I've written a blog post about that in the past: http://250bpm.com/blog:18

Not exactly in terms of bytes transferred. It's OK to open any number of connections between two services, even if they speak WebSockets. But with browsers: 1. They tend to have limits on the number of connections (you can have a few tens of services behind a single gateway, and a user having 50-100 tabs is not so uncommon).
That's ugly. So there's an artificial limit on the number of connections that forces us to replace time-tested TCP multiplexing with some kind of hack of our own, for no good reason, no performance improvement or such.
2. WebSocket connections are usually pinged so that a closed client connection is detected quickly, so the traffic is huge. 3. Allowing lots of connections per host makes the server easy to DoS-attack (remember NAT'd clients). 4. Some companies have hundreds of employees behind NAT, so they can easily exhaust the number of ports on the client side.
I wonder what the conclusion from 2, 3 and 4 is? To not use WebSockets and rather use raw TCP services spread over multiple ports?
4. There are also security checks (for Sec-WebSocket-Origin) that have to somehow be configured.

Dunno. That particular check seems to be pretty weak. AFAICS the client can fake any origin it wants. Therefore we can just allow all origins by default. If needed, we can add some checks later on.

It's not weak. It prevents JavaScript on the web page http://malicious.attacker/ from connecting a WebSocket to http://gmail.com with your credentials. So yes, it's not a check against bots; it's a check against JavaScript on malicious web pages.
Right, some kind of whitelisting the origins can be added then.
I am wondering what the alternative would be. Any ideas?

The gateway, which does access/auth checks and some sort of tunnelling. It's kind of a tcpmux on steroids. But, firstly, to have some real authentication and tunnelling, the gateway must read and understand the WebSocket packets itself. If the gateway does packet-based routing, it may (I would argue it should) just prefix each packet with authentication info and send that packet over a persistent nanomsg connection. I.e. you shouldn't have a nanomsg connection per WebSocket connection, but a persistent nanomsg connection between the service and the gateway. So it may be an nginx module, or a standalone HTTP/WebSocket server like zerogw or mongrel, but not a small and transparent one like tcpmuxd.
Ok, so there are two concerns, tunnelling and more rigorous initial handshaking, right?
I am still not convinced about the former (see above). As for the latter: can ZeroGW be re-purposed to act as a WebSocket multiplexer, i.e. can it, after doing the initial handshake, pass the connection to the application, the way that tcpmuxd does?
Martin