On Tuesday, 16.06.2015 at 10:35, Antti Kantee wrote:
I have no attachment to this particular *implementation*, in fact I'd be
the first to throw it out and replace with something like SquashFS.
I read between the lines that you don't really even want the tarball
method (correct me if I'm reading too much into it).
Wouldn't a zero-effort driver implementation have been easier than an
ad hoc tarball implementation, i.e., use MFS to mount an in-memory FFS
image? We'd need to bundle newfs and fs-utils, but we want to do that
*anyway* for other purposes. It seems like a double win over adding
something you don't really want.
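For concreteness, the flow I have in mind would look roughly like the
below (paths and the rc.conf contents are made up; makefs is NetBSD's
tool for building an FFS image from a directory tree without needing
root privileges):

```shell
# Stage a root tree with whatever config the service needs
# (illustrative contents only):
mkdir -p rootfs/etc
echo "hostname=rumprun" > rootfs/etc/rc.conf

# On a system with NetBSD's makefs, build an FFS image from the tree:
#   makefs -t ffs rootfs.img rootfs
# The image would then be baked into the unikernel binary and mounted
# via MFS at boot, so no external block device is needed.
```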
Therefore, if I want to run a minimal and small[*] service as a rumprun
unikernel in the cloud *today*, and that service does not need any
persistent data, why should I bother with an (external) block device at all?
Ok, good reason. Can you point me to a resource giving an overview?
I clearly need to read up on the subject.
You too are confusing a particular way of bundling the binary+fs
with the concept of it.
I absolutely think that we should be able to distribute a single
file image. The problem is that to launch it, you must somehow
distribute the rumprun parameters too. So I'm a bit confused as to
how including the data but not the configuration in the image gives
you what you actually want. That's the whole thing that bugs me. I
can't wrap my head around the different configuration spaces, which
*all* contribute to what happens at runtime:
1: binary code
2: block device data
3: rumprun launch parameters, including application command line
(4: now-proposed alternative way to supply data)
I get the feeling that we aren't thinking hard enough outside of the
context of the toolchain that we already have to obtain the solution
that we really want.
Doesn't spawning a qemu for networking take time? Or do you want to
run a kernel without any I/O capabilities?
I can make the implementation even more minimal and allow for including
only a single directory tree if it bothers you. The reason I implemented
multiple -R's was to provide a (granted, poor/minimal but working) ability
to accomplish:
rumpbake -R ...path/to/default/root -R data/ ...
i.e., give us the ability to provide a default /etc in the rumprun
repository.
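The overlay semantics I intended can be mimicked with plain directory
copies (directory names are illustrative): each later -R tree is laid
on top of the earlier ones, later trees winning on conflicts.

```shell
# Stand-ins for "rumpbake -R default-root -R data ...":
mkdir -p default-root/etc data/etc
echo "default" > default-root/etc/motd
echo "app data" > data/etc/app.conf

# Effective root = union of the trees, in -R order:
mkdir -p merged
cp -r default-root/. merged/
cp -r data/. merged/
ls merged/etc   # both the default motd and the app's app.conf end up here
```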
So with a single -R parameter, is the user responsible for
figuring out where to get /etc from?
Is rumpbake really the best tool for handling the complexity? How
about a builtin file system tool which supports something like
"prepopulate", <user uses normal shell commands here>, "slurp"?
(throwing out an unchewed idea, not saying it's a good one)
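To give that unchewed idea a bit more shape, the workflow could look
something like this (command names entirely hypothetical, nothing like
this exists):

```
rumpfs prepopulate stage/        # seed stage/ with a default root (/etc etc.)
vi stage/etc/rc.conf             # user edits with normal shell tools
rumpfs slurp stage/ image.bin    # capture the tree into the baked image
```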