Re: network bridge configuration for rumprun

  • From: Martin Lucina <martin@xxxxxxxxxx>
  • To: Anil Madhavapeddy <anil@xxxxxxxxxx>
  • Date: Mon, 29 Jun 2015 11:19:03 +0200

On Monday, 29.06.2015 at 10:08, Anil Madhavapeddy wrote:

On 29 Jun 2015, at 10:04, Antti Kantee <pooka@xxxxxx> wrote:

Hi,

So I was thinking about the problem of specifying a network bridge
configuration in rumprun, as motivated here:
http://wiki.xenproject.org/wiki/Upstream_QEMU_stubdom

It was easy enough to add bridge manipulation to netconfig, but that
doesn't really help us with specifying the configuration at launch time.
Assuming we want to avoid rumpctrl for the "more moving parts" reason,
there are, as far as I can see, two approaches (not really specific to
bridge configuration, so imagine it's a general problem):

1) improve "multiexecutable" support to allow for "barriers", i.e. wait for
all programs [..n-1] to finish before starting to run [n...]. Then bridge
configuration becomes someone else's problem.
2) add support to rumpconfig to be able to specify bridge config via json

For "1", the hard part is the syntax towards the user. Solving the problem
of passing a different argv[] to each "executable" within the unikernel is
probably close enough so that it should be solved at the same time, and the
whole simple matter unravels into a mess of solving everything.
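
To make the intended semantics concrete, a rough sketch (illustration only,
not rumprun code; the program table, entry points and names below are made
up) could look like this, with threads standing in for the baked-in programs:

    /*
     * Illustration only, not rumprun code: the intended "barrier"
     * semantics for a multiexecutable image.  Everything listed before
     * a barrier must return before anything after it is started, and
     * each program gets its own argv[].
     */
    #include <pthread.h>
    #include <stddef.h>

    struct prog {
        const char *name;
        int (*entry)(int, char **);     /* hypothetical entry point */
        int barrier_after;              /* wait here before going on */
    };

    /* stand-ins for the baked-in programs */
    static int bridgecfg_main(int c, char **v) { return 0; }
    static int app_main(int c, char **v) { return 0; }

    static struct prog progs[] = {
        { "bridgecfg", bridgecfg_main, 1 },     /* must finish first */
        { "app",       app_main,       0 },
    };
    #define NPROGS  (sizeof(progs) / sizeof(progs[0]))

    static void *
    runner(void *arg)
    {
        struct prog *p = arg;
        char *argv[] = { (char *)p->name, NULL };

        p->entry(1, argv);
        return NULL;
    }

    int
    main(void)
    {
        pthread_t t[NPROGS];
        size_t i, first = 0;

        for (i = 0; i < NPROGS; i++) {
            pthread_create(&t[i], NULL, runner, &progs[i]);
            if (progs[i].barrier_after) {
                /* barrier: drain everything started so far */
                for (; first <= i; first++)
                    pthread_join(t[first], NULL);
            }
        }
        for (; first < NPROGS; first++)         /* wait for the rest */
            pthread_join(t[first], NULL);
        return 0;
    }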

For "2", it doesn't sound feasible to add support to the rumprun command
line syntax. Therefore, we'd have to add support for passing blocks of
handcrafted json. Do we want to go there? If yes, should the custom
blocks be handled by the monolithic rumpconfig, or should there be some
mechanism for linking in components which do their own json parsing, with
the custom blocks simply sent off to those parsers? Support for custom
json handlers is probably on the order of 25 lines of code, but what does
it do to user-perceived complexity?
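
For a sense of scale, those ~25 lines could look roughly like the sketch
below (names and types invented for illustration, not an existing
interface; "jvalue" stands in for whatever parsed-json representation
rumpconfig already has):

    /*
     * Sketch only, not an existing interface: components linked into
     * the image register a handler for "their" top-level json block,
     * and the monolithic config code hands unknown blocks to them.
     * "jvalue" stands in for whatever parsed-json type rumpconfig uses.
     */
    #include <string.h>

    struct jvalue;                              /* opaque parsed json */

    struct cfg_handler {
        const char *block;                      /* e.g. "bridge" */
        int (*parse)(const struct jvalue *);    /* component's parser */
        struct cfg_handler *next;
    };

    static struct cfg_handler *handlers;

    void
    rumprun_config_register(struct cfg_handler *h)
    {
        h->next = handlers;
        handlers = h;
    }

    /* called by the config code for top-level blocks it does not know */
    int
    rumprun_config_dispatch(const char *block, const struct jvalue *v)
    {
        struct cfg_handler *h;

        for (h = handlers; h != NULL; h = h->next)
            if (strcmp(h->block, block) == 0)
                return h->parse(v);
        return -1;                              /* unknown block */
    }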

I'm (almost) certain we'll need "1" sooner or later. I'm not (entirely)
convinced we'll need "2". Thoughts?

Bridge configuration is sufficiently varied that replicating the details is
almost always infeasible. We do 1) in Mirage, and you can retain low-latency
setup by writing a handcrafted binary that calls the bridge ioctls directly
and avoids shell script forking.
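
For reference, a minimal sketch of such a host-side helper on a Linux dom0
(interface names are examples only, and error handling is kept to a
minimum):

    /*
     * Sketch of a host-side (Linux dom0) helper that calls the bridge
     * ioctls directly instead of forking brctl/ip: create br0 if needed
     * and add a guest vif to it.  Interface names are examples only.
     */
    #include <sys/ioctl.h>
    #include <sys/socket.h>

    #include <net/if.h>
    #include <linux/sockios.h>

    #include <err.h>
    #include <errno.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
        struct ifreq ifr;
        int s;

        if ((s = socket(AF_LOCAL, SOCK_STREAM, 0)) == -1)
            err(1, "socket");

        /* "brctl addbr br0"; an already existing bridge is fine */
        if (ioctl(s, SIOCBRADDBR, "br0") == -1 && errno != EEXIST)
            err(1, "SIOCBRADDBR");

        /* "brctl addif br0 vif1.0" */
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "br0", IFNAMSIZ - 1);
        if ((ifr.ifr_ifindex = if_nametoindex("vif1.0")) == 0)
            err(1, "if_nametoindex");
        if (ioctl(s, SIOCBRADDIF, &ifr) == -1)
            err(1, "SIOCBRADDIF");

        close(s);
        return 0;
    }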

The other benefit of just doing 1) sooner rather than later is that it will
permit other forms of setup in the future, such as vchan channels for
enclaves of unikernels (e.g. miragetls+php+nginx).

I think Anil and Antti are talking about different things:

1) Antti is describing "setting up a bridge topology *inside* a rumprun
unikernel"; the general problem is "configuring X, Y and Z in a unikernel",
with a view to re-using existing NetBSD configuration tools for "X, Y and Z"
where possible, with no changes.

"X, Y and Z" could be bridge setup, wlan setup, encrypted block volume
setup, etc. etc. Basically anything (applicable to a unikernel) that a UNIX
would do at boot time.

2) Anil is describing "setting up a bridge topology on the *host* (dom0,
unix, whatever) so that a Mirage unikernel can communicate with the outside
world". This is a different problem, and while it is also related to
rumprun, especially for qemu, it is a different discussion :-)

Am I right?
