[nanomsg] Re: end-to-end security

  • From: Alex Elsayed <eternaleye@xxxxxxxxx>
  • To: nanomsg@xxxxxxxxxxxxx
  • Date: Wed, 12 Mar 2014 12:15:51 -0700

Garrett D'Amore wrote:

> On March 12, 2014 at 8:42:32 AM, Alex Elsayed (eternaleye@xxxxxxxxx)
> wrote:
>> Sure!
>> Okay, first let's examine the constraints OpenPGP works under when used
>> for email (note, this also applies to S/MIME, so there are ways in which
>> this could have a unified trust architecture with transport-level stuff.
>> Both have standards-track RFCs.).
>> OpenPGP email sends a single message to one or more peers with known
>> public keys, without any (protocol-relevant) replies from those peers. It
>> includes sufficient information in the message for those peers to look up
>> the public key of the sender. It can encrypt to more than one peer at
>> once due to a hybrid design of using public-key cryptography to encrypt a
>> symmetric key, and encrypting the message with that - just encrypt the
>> symmetric key multiple times _on the same encrypted envelope_ (once for
>> each public key), reducing bloat by not duplicating the message. It
>> requires some method of looking up keys from key IDs, usually via a user
>> keyring and frequently also via keyservers.
>> All of the requirements can be satisfied in a REQ/REP setting:
>> - Single message can be chained into an exchange; the reverse is not true
>> which is why CurveCP won't work here
>> - Public keys of all peers which the message might reach (i.e. load-
>> balancing that endpoint) can be retrieved from the management interface
>> - For reply, the management interface can take the key ID in the message
>> to look up the key for the return path
>> - Message size doesn't inflate _too_ badly because whole-message
>> duplication is avoided
>> Overall, REQ/REP has a lot of similarity to email - the scale of latency
>> is different, but you can pretty easily see XREQ/XREP as intermediary
>> mail relays, REQ as the initiating MTA, and REP as the receiving mail
>> server. There might be a load-balanced set of such mail servers, or
>> multiple people sharing the email address - so you encrypt to all
>> potential valid recipients and any can open it.
>> Since OpenPGP and S/MIME are actually one-way, I'm starting to wonder if
>> this is a system that might work across SPs; implementing it at the
>> underlying level of the SP protocol itself might (surprisingly to me) be
>> workable.
> Seems like a lot of additional complexity.  In particular, you have to
> solve the key lookup, and you’re requiring the *sender* to send the
> encrypted session key to each party every time.  Also, PGP/SMIME are not
> “secure” against replay.  While for mail this is fine (we actually would
> prefer to have mail duplicates rather than lose mail, after all!), for
> other things — like an RPC — replays could be tragically bad.  You need a
> nonce or time stamp (and in the case of a nonce, you need to have some
> agreement — which is part of the painful handshake that SSL/TLS does — to
> ensure that one side can’t choose the nonce on his own.)  Nonce agreement
> isn’t practical for store-and-forward, at least not without some extra
> work — because you really do need a full duplex exchange to get an
> agreement.

Key lookup will be necessary in any end-to-end system for nanomsg - you 
either need a.) a round trip to exchange keys, b.) static keys, or c.) a key 
lookup mechanism.

a.) is impossible with SP semantics, b.) is impossible with dynamic 
join/leave, which means c.) is the only option.

Session-key-with-message is also impossible to avoid in e2e.
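
To make the structure concrete, here is a minimal sketch of an OpenPGP-style hybrid envelope in Python. The "public-key wrap" is simulated with a toy XOR-with-derived-keystream stand-in (real systems would use RSA or ECIES here, and a real cipher instead of the SHA-256 counter-mode toy); the point is only the shape: one ciphertext, one small wrapped session key per recipient.

```python
import hashlib
import os

def _keystream(key: bytes, n: int) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode. Illustration only."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def _xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def seal(message: bytes, recipient_keys: dict) -> dict:
    """OpenPGP-style envelope: the message is encrypted ONCE under a fresh
    session key; only the session key is wrapped per recipient."""
    session_key = os.urandom(32)
    return {
        # one ciphertext, shared by all recipients - no whole-message duplication
        "ciphertext": _xor(message, session_key),
        # one small wrapped key per recipient (stand-in for RSA/ECIES wrapping)
        "wrapped_keys": {
            kid: _xor(session_key, rk) for kid, rk in recipient_keys.items()
        },
    }

def open_envelope(envelope: dict, key_id: str, recipient_key: bytes) -> bytes:
    """Any listed recipient unwraps the session key, then decrypts."""
    session_key = _xor(envelope["wrapped_keys"][key_id], recipient_key)
    return _xor(envelope["ciphertext"], session_key)
```

Note how adding a recipient grows the envelope by one wrapped key (a few dozen bytes), not by another copy of the message - that is the bloat-reduction property described above.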

Replay is an issue that would need to be addressed, yes - timestamps require 
clock synchronization; nonces require synchronizing the state of seen-nonces 
between load-balanced endpoints. This is nontrivial. However, it _is_ no 
worse than unencrypted traffic, even with replays.
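
A per-endpoint seen-nonce cache is easy to sketch; the hard part is exactly what is noted above - in a load-balanced topology, every endpoint that might receive the message needs a consistent view of the set. This is a hedged, single-endpoint sketch (HMAC-SHA256 standing in for whatever authentication the real protocol uses; all names are hypothetical):

```python
import hashlib
import hmac
import os

class ReplayCache:
    """Per-endpoint replay protection: reject any message whose nonce has
    already been seen. Synchronizing self._seen across load-balanced
    endpoints is the nontrivial part, and is not shown here."""
    def __init__(self, key: bytes):
        self._key = key
        self._seen = set()

    def accept(self, nonce: bytes, payload: bytes, tag: bytes) -> bool:
        expected = hmac.new(self._key, nonce + payload, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return False          # forged or corrupted message
        if nonce in self._seen:
            return False          # replay: nonce was already consumed
        self._seen.add(nonce)
        return True

def send(key: bytes, payload: bytes):
    """Sender side: pick a fresh random nonce and authenticate it with the payload."""
    nonce = os.urandom(16)
    tag = hmac.new(key, nonce + payload, hashlib.sha256).digest()
    return nonce, payload, tag
```

An attacker replaying the identical triple is rejected the second time, without any clock synchronization - at the cost of unbounded (or windowed) state per endpoint.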

> The one advantage of a system like this is that it can answer end-to-end
> security, with peers having no access to payload contents.  That’s a good
> thing when your architecture involves untrusted bits of routing fabric in
> the middle.

It also provides different guarantees than transport-layer encryption, which 
is part of why I favor supporting _both_. For one, with transport-layer 
encryption, if any hop's key is compromised, all traffic through that hop is 
compromised, since the attacker can then tap the SP messages and forward them 
out. With e2e, they cannot read the payloads - but they can manipulate 
routing information to deny service, or trigger replays.

I honestly feel that a meaningful security architecture will have to make 
use of both transport-layer _and_ e2e security.

> I suspect that in practice, deployment involving bits and pieces
> (devices!) that *aren’t* trusted in between peers is probably somewhat
> unusual.  Drew may have use cases that show otherwise (his experience
> seems geared to the mobile space.)  In fact, the more I think about Drew’s
> concerns about the mobile space, the more I have questions — which I will
> ask at the end of this message.

I agree; part of why I favor using both methods is that transport-layer 
makes the boundaries of security domains more apparent, while e2e makes 
boundaries of right-to-know more apparent. Both are valuable.

> In a fabric/topology where all parties are reasonably trusted to see each
> others messages, you only need to keep hostile parties out of the fabric. 
> (Like a VPN scenario.)  In this case, transport security can solve the
> need.


> For pseudo-multicast (PUB/SUB), the broadcaster will have to resend
> / reencrypt the message for each subscriber.  This would be wasteful if we
> could have used a multicast transport (we don’t support that now in
> nanomsg), and it wastes CPU cycles on the transmitter.  But it avoids
> sending n encrypted sessions to each subscriber, too, which is a good
> thing. :-)

There are costs involved, yes. Then again, who is to say that the entire 
application needs to use the same level of e2e? If you have an _internal_ 
system that is e2e from producer to consumer, which then publishes without 
e2e to the _public_, you have still gained - that gateway can be assured 
that the data it is republishing has not been tampered with.
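
The gateway's assurance can be sketched as follows - internal producers authenticate end-to-end to the gateway, which verifies before republishing in the clear. This is a hypothetical illustration (a shared HMAC key stands in for whatever e2e scheme the internal topology actually uses):

```python
import hashlib
import hmac

# Hypothetical key shared by the internal producer and the gateway only.
INTERNAL_KEY = b"shared-internal-key"

def produce(payload: bytes):
    """Internal producer: authenticate the payload end-to-end to the gateway."""
    tag = hmac.new(INTERNAL_KEY, payload, hashlib.sha256).digest()
    return payload, tag

def gateway_republish(payload: bytes, tag: bytes) -> bytes:
    """Public-facing gateway: verify the internal tag, then publish the bare
    payload. The public side gets no e2e protection, but the gateway knows
    the data was not tampered with in transit through the internal fabric."""
    expected = hmac.new(INTERNAL_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("tampered message - refuse to republish")
    return payload
```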

> I don’t believe we’re serious about trying to make nanomsg utilize IP
> level multicast, are we? Even for pub/sub it seems like IP multicast
> creates more problems than it really solves (mostly because IP multicast
> is unevenly supported across increasingly fragmented IP/NAT’d networks,
> and limited or broken router/gateway firmwares.)

Agreed, although there are more reasons to use datagram transports than 
multicast. In particular, multiplexing of channels over unreliable datagrams 
sidesteps the head-of-line-blocking horrorshow that happens when you try to 
multiplex over TCP. See also the 'minion' suite of protocols, which turn TCP 
(or TLS!) into something with unreliable-datagram semantics (which are 
useful) without changing the wire format (allowing it to be deployed even 
over networks that try to "helpfully" munge your data).
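
The head-of-line problem is easy to demonstrate with a toy simulation: two logical channels multiplexed over one in-order stream, where one lost-and-retransmitted segment stalls delivery on the *unrelated* channel (all names here are illustrative, not any real nanomsg API):

```python
def deliver_in_order(segments):
    """Simulate TCP-style in-order delivery: a segment reaches the
    application only once every lower-numbered segment has arrived."""
    buffered = {}
    next_seq = 0
    delivered = []
    for seq, channel, data in segments:
        buffered[seq] = (channel, data)
        while next_seq in buffered:
            delivered.append(buffered.pop(next_seq))
            next_seq += 1
    return delivered

# Channel A's segment 0 is lost and retransmitted last; channel B's
# segments arrive promptly but are stalled behind it anyway.
arrivals = [
    (1, "B", b"b1"),
    (2, "B", b"b2"),
    (0, "A", b"a0"),  # the retransmission finally lands
]
```

Running `deliver_in_order(arrivals)` hands the application nothing at all until A's retransmission arrives, even though both of B's messages were sitting in the buffer - which is precisely what an unreliable-datagram substrate sidesteps.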

> Now, my “system engineering” questions for Drew — and are not specific to
> nanomsg at all — and this is really because I want to better understand
> the problem where TLS is a poor fit (the mobile space has been cited). 
> Here’s my thinking so far, which leads me to more questions than answers:
> 1. Latency has been cited as a concern (specifically TLS setup/handshake).
>  I’m having a hard time imagining an application where you have a real
> concern about latency, and are simultaneously unable to maintain a
> connection.  In the mobile apps I’m familiar with, its usually an
> either/or.
> 2. A background app — e.g. twitter feed monitor — can usually accept
> latencies of up to 10s of seconds.  A few hundred msec is certainly no
> problem at all.  (Question: maybe the *server* needs lower latency
> notifications from mobile apps?)
> 3. A realtime app — e.g. a game — generally runs in the foreground with
> the *user’s* attention, and while its running in the foreground,
> maintaining a connection (TLS/SSL) is also pretty darn trivial.  The
> battery drain from keeping a foreground session alive while playing a game
> or interacting with a real-time app should hardly be a concern.
> 4. I think there is also TLS/SSL session resumption designed to avoid some
> of the costs.  Not sure what platforms actually have it implemented
> though.

Pretty much all of them. It's such a huge latency/power-use win that it's 
very common.

> 5. I agree that RSA (and other “factorization of products of large primes”
> based algorithms) is rather expensive — both compute and latency.  And the
> SSL/TLS streaming ciphers might be a bit expensive.  But I think there are
> lower cost optimized versions available for both asymmetric and symmetric
> portions of TLS (e.g. EC curve for key nego, and blowfish etc. for
> streaming).  Actually I suspect many modern platforms have AES in
> hardware. :-)  But anyway, we should be able to optimize the cipher
> choices for the platform without throwing away SSL/TLS altogether.

Yup - the various ECDHE_ECDSA_* ciphersuites are quite good on that count. 
However, Blowfish is actually a pretty terrible choice when low latency 
matters - its key schedule is so famously expensive that it is the basis of 
the bcrypt password hash! In addition, Blowfish suffers from classes of weak 
keys, and is frequently disabled for that reason. AES is simply better there.

In addition, ciphersuites are monolithic - you can't mix and match key 
exchange and symmetric algorithms freely, which surprises many people. I 
don't think the combination of EC and Blowfish was ever given a code point, 
just as none of the Kerberos KEX modes has an AES-GCM variant.

Currently, some ChaCha-based stuff is passing through the IETF to become a 
TLS ciphersuite, and would be essentially ideal for his needs - it runs at 
near-AES speeds even when AES has hardware support.

Do not use RC4: it is known to have serious biases in its output, which 
make a number of attacks practical.
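
The best-known bias is easy to see empirically: the second keystream byte is zero with probability roughly 2/256 rather than the 1/256 an unbiased cipher would give (the Mantin-Shamir bias). A textbook pure-Python RC4 demonstrates it:

```python
import os

def rc4_keystream(key: bytes, n: int) -> bytes:
    """Textbook RC4: key-scheduling algorithm (KSA), then PRGA for n bytes."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def second_byte_zero_rate(trials: int) -> float:
    """Fraction of random keys whose second keystream byte is 0.
    Unbiased would be ~1/256; RC4 gives ~2/256 (Mantin-Shamir)."""
    zeros = sum(rc4_keystream(os.urandom(16), 2)[1] == 0 for _ in range(trials))
    return zeros / trials
```

Over a few tens of thousands of random keys, the measured rate lands near 2/256 - a bias large enough to leak plaintext when many messages share a position, which is exactly the kind of attack that rules RC4 out.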

> So, can Drew (or anyone else) help me with an example description of an
> application where the need for both low-latency and power-savings plays a
> major design factor?
> Thanks.
> - Garrett
> Martin Sustrik wrote:
>> Hi Alex,
>> Very good analysis. It nicely demonstrates the point that building
>> security on SP level is a non-trivial problem.
>> It may be that we have to step back an look at the problem from 10,000
>> feet perspective: What is a topology? An interconnected cloud of
>> clients. What does security mean is such environment? Declining
>> unauthorised people to access the topology? Something more
>> fine-grained? Etc.
>> Btw, your suggestion that REQ/REP scenario is similar to PGP one is an
>> intrguing one. Can you elaborate?
>> Martin
>> On 11/03/14 23:45, Alex Elsayed wrote:
>>> Replies inline
>>> Drew Crawford wrote:
>>>> Hello folks,
>>>> I’ve written before to gauge the interest level on landing
>>>> encryption support to nanomsg.
>>>> After my last post, I tentatively decided to go with a
>>>> libzmq-based solution. However, for reasons outside the scope of
>>>> this list, that hasn’t gone as well as I’d liked, and I’m now
>>>> thinking about nanomsg once again.
>>>> The problem is important enough that I actually have time to work
>>>> on it, and due to time constraints I’m going to settle on some
>>>> solution in the next few days. The only open question at this
>>>> point is whether I’m going to land patches in nanomsg, or whether
>>>> I’m going to be doing some kind of private solution, like a
>>>> private fork or wrap of some library. I’d prefer the former if
>>>> possible.
>>>> I’d like to make a concrete proposal for comment. As far as I
>>>> can tell, there hasn’t been further discussion on the subject of
>>>> encryption since my last post. Here is what I’m thinking on
>>>> design decisions:
>>>> End-to-end, “well-above-transport-layer” security. Don’t get me
>>>> wrong, there is a good case for transport-layer security. Zeromq
>>>> has used it with some success. I use it right now. The thing
>>>> is, I’ve become convinced it’s the wrong approach for **my** set
>>>> of problems.
>>> Alright, that's a fair enough thing to say...
>>>> Zeromq's support gets poor when you move out of TCP transport.
>>> ...but this implies to me that you are conflating a poor
>>> implementation with a poor approach.
>>> TLS works over any reliable in-order stream - if you have AF_UNIX
>>> SOCK_STREAM, then you have something TLS can be run over.
>>> DTLS works over any bidirectional datagram transport (and there are
>>> ways to make it work for unidirectional cases) - thus you can use it
>>> over UDP, for example.
>>> In both cases, you just need to let the TLS library know how to
>>> send the data. Some make this easier than others; OpenSSL in
>>> particular is sadly burdened with a rather poor API. GnuTLS is
>>> nicer in various ways, but uses a license that makes it unlikely to
>>> be the first (or possibly even _a_) choice for nanomsg.
>>>> It would be a lot of work for them to support IPC, for example,
>>>> which I’m mildly interested in. I suspect that UDP is somewhat
>>>> challenging as well, which is a long-term goal.
>>> If that's the case, then the issue is a poor implementation in ZMQ.
>>> Not a limitation of TLS/DTLS - see above.
>>>> Doing security work near the surface means it’s completely
>>>> decoupled from adding new transports, which is good if you want
>>>> new transports, and also good if you want security to work with
>>>> them.
>>> Agreed, and doing security at the transport layer means it's
>>> decoupled from new SPs, and the same arguments apply. That's the
>>> reason I feel that _both_ should be implemented sooner or later.
>>>> Patches to the cryptography require deep knowledge of zeromq
>>>> internals, and the people with the right knowledge are often
>>>> busy.
>>> Patches to any cryptography require deep knowledge of the many
>>> pitfalls, and the people with the right knowledge are quite
>>> uncommon overall. It's the main reason that sticking with tested,
>>> well-known systems is so critical - changes that _seem_ small and
>>> inconsequential have time-and-again resulted in complete
>>> invalidation of the assumptions that the security of a system
>>> relies on.
>>>> When minor features to security are needed it creates major
>>>> delays.
>>> ...which, IMO, are better than minor changes to security leading to
>>> major losses of security.
>>>> If security sits near the surface it requires knowledge of mostly
>>>> public APIs and so cryptography work can proceed without
>>>> scheduling meetings with core committers to understand the
>>>> obscure internal design of the day.
>>> "Obscure internal design of the day" was one of the problems with
>>> ZMQ that inspired the creation of nanomsg in the first place - it's
>>> explicitly designed to be componentized, such that this kind of
>>> work _isn't_ arcane deep magic.
>>>> Focus on REQ/REP, and maybe DEVICE, which are the sockets I’m
>>>> interested in.
>>> The problem is that REQ/REP has some very hostile semantics when
>>> implementing encryption atop it. Incomplete list:
>>> - Requires 0-RTT key exchange (means forward secrecy is
>>> impossible) - Cannot assume two REQs go to the same endpoint. Thus,
>>> every single REQ must contain entire key exchange data (REP,
>>> however, may potentially reuse state in some cases. Requires
>>> study). This bloats small requests enormously.
>>>> The other socket types can wait until somebody is sufficiently
>>>> motivated to make security work for those socket types.
>>> Wholly agreed there. Transport security is per-transport, SP
>>> security is per-SP. Because of the wide variance in semantics
>>> between SPs, it's incredibly unlikely they can all provide the same
>>> security guarantees, much less use the same protocols.
>>>> Stick close to CurveCP where sensible, but allow for some
>>>> experimentation. Maybe the user can choose from several competing
>>>> security mechanisms.
>>> CurveCP will not work here. First of all, it's not 0-RTT. If you
>>> require a REQ/REP for key exchange before the data, your system is
>>> broken due to endpoint load-balancing. You need to bundle key data
>>> with your outgoing REQ, or at least sufficient identifiers for it
>>> to be looked up out-of-band (say, via the management interface).
>>> REP must do the same. The result looks more like OpenPGP than
>>> CurveCP - in fact, OpenPGP would work without any changes. Pity
>>> about the message inflation.
>>> In addition, you have the problem of key management. If you have a
>>> load- balanced set of REQ/REP endpoints, then do they all have the
>>> same key? If yes, that's a problem because you can't cut one out in
>>> the case of compromise etc. If no, you can't control which one the
>>> REQ gets routed to, and thus the sender must encrypt it to _every_
>>> potential recipient, resulting in a major amplification of both
>>> compute time and message size.
>>> I have yet to see someone suggest doing encryption over REQ/REP
>>> without completely ignoring the fundamental part of REQ/REP where
>>> it says that there are no guarantees of endpoint continuity between
>>> two REQs.
>>> If that is ignored it's easy! It's also wrong, broken, and
>>> insecure.
>>> <implementation details snipped>
