[openbeosstorage] Re: To Unmount Or Not To Unmount

  • From: Ingo Weinhold <bonefish@xxxxxxxxxxxxxxx>
  • To: openbeosstorage@xxxxxxxxxxxxx
  • Date: Fri, 26 Sep 2003 21:40:38 +0200

On 2003-09-26 at 08:26:47 [+0200], Tyler Dauwalder wrote:
> On 2003-09-23 at 14:09:44 [-0700], Ingo Weinhold wrote:
> > On 2003-09-23 at 20:38:24 [+0200], Axel Dörfler wrote:
[...]
> > > That's what I thought, too... that's what the integration with the VFS
> > > is all about, right?
> > 
> > I should clarify the situations that can occur:
> > 
> > 1) Partition is busy: There are scheduled or already executing jobs that
> > affect the partition. Jobs are scheduled when
> > BDiskDevice::CommitModifications() is invoked.
> > 
> > 2) A user has invoked BDiskDevice::PrepareModifications() (started a
> > transaction, to use that term) but not yet called
> > BDiskDevice::{Cancel,Commit}Modifications() (rolled back respectively
> > committed the transaction).
> > a) Mount()/Unmount() is invoked on a BPartition object belonging to the
> > very same BDiskDevice object.
> > b) Mount()/Unmount() is invoked on a BPartition object belonging to 
> > another
> > user (but of course referring to the same partition).
> > c) mount()/unmount() is invoked for the partition.
> > 
> > There's no further distinction of cases for 1), because in any case the
> > same must happen: mount(), unmount(), Mount(), and Unmount() must fail, for
> > otherwise we'd risk that something bad happens if the mount state
> > changes behind the back of a job that operates on that partition.
> > 
> > What shall happen in cases 2a)-c) was the subject of my mail. :-)
> 
> I really did understand that, even if it perhaps didn't seem like it. :-)
> When you said, "On the other hand, mounting/unmounting can be done by a
> third party at any time, even if one is preparing modifications at that
> time", it just sounded as though you were implying a call to mount() or
> unmount() could go and do things behind the back of the ddm. That being
> said, having the cases enumerated is useful. :-)

Ah, OK, then I misunderstood you.

> > > > This seems like an
> > > > (almost :-) reasonable restriction to me: once you start planning a 
> > > > set
> > > > of jobs that involve a partition, you can't mount/unmount it until you
> > > > cancel or no more
> > > > jobs remain that affect that partition.
> > 
> > Please don't mix that up. Until you commit the changes you're in
> > situation 2) (and there aren't any jobs yet), that is, in principle
> > there'd be no problem with mounting/unmounting immediately. At least from
> > the DDM point of view.
> 
> No, I wasn't mixed up (at least, I don't think I was :-). I was saying it
> would seem reasonable to me to disallow mounting and unmounting anything on
> a device from the time PrepareModifications() was called on a device until
> no shadow partitions were left for it, and then disallow mounting and
> unmounting any busy partitions as long as they were still marked busy, just
> as is implied by the suggestion below that Mount() and Unmount() call
> PrepareModifications() on their own as needed.

I see. That's rather restrictive, but I believe it would be the easiest 
variant to implement. Though, unless there are significant differences in the 
complexity of the possible approaches, I think implementation effort 
shouldn't be a criterion affecting the decision for or against any one of 
them.

> > > > We could add an appropriate
> > > > error
> > > > code so we could give a reasonable error message in Tracker and
> > > > mount()/unmount(), suggesting that "The partition is locked, perhaps
> > > > you're
> > > > mucking around with things in DriveSetup?" similar to the busy
> > > > message you
> > > > get when you try to unmount a CD you're currently playing in
> > > > SoundPlay. The
> > > > only problem I see with this so far is that it does seem better to
> > > > allow a
> > > > partition to be unmounted even if jobs are being prepared for it, but
> > > > perhaps this would be justification for a hybrid approach with
> > > > Unmount() as
> > > > addressed below.
> > > 
> > > Not sure about that - the jobs in question should know about the drive
> > > status; I could imagine that their operation is different when the disk
> > > is mounted vs. unmounted.
> > > Maybe we should think about that deeply, first, too :-)
> > 
> > Well, as far as the jobs go, there are no choices. As soon as they are
> > created, the partitions that are going to be affected are marked busy and
> > mounting/unmounting and modification requests must fail.
> > The point is that the jobs are only created when the modifications are
> > committed. Until then, we are free to mount/unmount the would-be affected
> > partitions.
> 
> I think his point was that, for example, whoever is responsible for resizing
> the bfs partition is likely going to want to know whether the partition is
> mounted or not, assuming resizing while mounted is supported.

Maybe my understanding suffered a bit from too much alcohol or sleep 
deprivation, but I can't see where the suspected problem lies. Of course, 
anyone involved in the process can check the status of the device/partition. 
For the disk system modules it is available through the 
{disk_device,partition}_data structures (either passed directly to the 
respective hook or retrievable via the exported DDM C functions), for the 
jobs and the rest of the DDM code KDiskDevice/KPartition hold that info, and 
-- though not first hand -- in userland the BDiskDevice/BPartition classes 
provide it.
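To illustrate, a disk system hook could consult that info roughly like this. 
This is only a minimal sketch with made-up stand-in types -- the real 
{disk_device,partition}_data layouts and hook signatures are defined by the 
DDM headers, not here:

```cpp
#include <cstdint>

// Hypothetical stand-ins for the DDM structures mentioned above; the actual
// fields and names live in the (kernel) disk device manager headers.
typedef int32_t dev_t_sim;          // stand-in for dev_t

struct partition_data_sim {
    dev_t_sim volume;               // ID of the mounted volume, or -1 if unmounted
    int64_t   size;                 // partition size in bytes
};

// A resize-validation hook, as a disk system module might implement it:
// refuse to shrink the partition while it is mounted.
bool validate_resize(const partition_data_sim* partition, int64_t newSize)
{
    bool mounted = partition->volume >= 0;
    if (mounted && newSize < partition->size)
        return false;               // shrinking while mounted: not supported
    return true;                    // growing, or unmounted: fine
}
```

So whoever resizes the BFS partition doesn't need any extra channel to learn 
the mount state -- it is right there in the data handed to the hook.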

[...]
> > > > > Oh, there's still the general question whether the user has to
> > > > > Unmount() the concerned partition explicitly, when e.g. CanResize()
> > > > > reported that it works only when the partition is unmounted, or
> > > > > whether Resize() would automagically do the unmounting (or at least
> > > > > mark the partition to be unmounted).
> > > > Perhaps an unmount call should just be placed in front of the
> > > > offending job in the job queue when it's built.
> > 
> > What I was thinking of was that, when a job gets executed, it would itself
> > first invoke the respective Validate*() hooks of the disk system again and
> > learn if the disk system needs a currently mounted partition unmounted, or
> > if something's fishy and it should rather fail on the spot.
> > 
> > So, if it realizes that the partition is mounted, but the disk system can
> > only operate on it when unmounted, it could either a) unmount the 
> > partition
> > or b) fail.
> 
> Or c) block and request that the user either unmount the partition or
> cancel. Or d) block and warn the user that when they click "unmount", the
> partition will be promptly unmounted, thus they should close anything from
> that partition that they currently have open (or risk losing data) or else
> click "cancel" to cancel the job queue.
>
> > The problem with a) is that it's a bit beyond the control of
> > the user. If, as Axel says, unmounting would always work without heeding
> > pending vnodes, then the user might risk losing data, I suppose.
> 
> I agree, that's a bit mean to pull the rug out from under the user like
> that. b) seems like a weak solution, too, though. c) and d) seem nicer to
> me. Is there any reason you can think of why either couldn't be made to 
> work?

Why, yes. The job is executed in the kernel, and while it would be no problem 
to block it, I see some difficulty in how the user could be notified (and how 
to even get feedback from them). If we had the Interface Kit available in the 
kernel (not that I'm recommending that; surely not :-), we could simply pop 
up an alert, but with the design as is, that can't be done without some 
additional userland service the kernel could use.

Mmh, we already discussed some time ago how to move (kernel) notification 
message delivery to the registrar. The registrar would register with the 
kernel, via a yet-to-be-defined interface, as the entity providing the 
notification message delivery service. In the same manner some server/daemon 
(if in doubt, the registrar again) could register as a kernel-user 
interaction service. For simple multiple-choice feedback requests (alerts), 
that shouldn't be too hard to do.
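A minimal sketch of what such a kernel-user interaction service could look 
like. All names here are hypothetical (the interface is yet to be defined), 
and in the real thing kernel and registrar would of course talk via 
ports/syscalls, not a function pointer -- this only pins down the control 
flow:

```cpp
#include <functional>
#include <string>
#include <vector>

// Signature of a userland feedback handler: gets a question and the possible
// choices, returns the index of the choice the user picked.
using InteractionHandler =
    std::function<int(const std::string& question,
                      const std::vector<std::string>& choices)>;

static InteractionHandler sRegisteredHandler;   // e.g. set by the registrar

// "Userland side": the registrar registering as interaction service.
void register_interaction_service(InteractionHandler handler)
{
    sRegisteredHandler = std::move(handler);
}

// "Kernel side": called by a blocked job that needs a decision from the
// user. Returns the chosen index, or -1 if no service has registered
// (in which case the job must fail safely on its own).
int ask_user(const std::string& question,
             const std::vector<std::string>& choices)
{
    if (!sRegisteredHandler)
        return -1;
    return sRegisteredHandler(question, choices);
}
```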

> > > What would we gain from the latter?
> > 
> > Obviously that one can unmount a partition, while another app is preparing
> > modifications for the same disk device (mind you, it doesn't even need to
> > be the same partition, nor does it play a role whether any changes have
> > been made at all, yet).
> > 
> > Just to throw another alternative onto the pitch: The Mount()/Unmount()
> > methods could have a boolean parameter indicating whether the operation
> > shall be carried out immediately or be queued. Not sure if that makes
> > any sense in the case of Mount(), though. Usually, invoking the modifying
> > methods on a BPartition object indeed immediately changes the object.
> > That's a bit complicated for Mount(), since one can't get a valid dev_t
> > until the partition is really mounted.
> > 
> > So, maybe only a `bool immediately = true' parameter for Unmount(), while
> > Mount() would always work immediately and fail if the partition was
> > modified?
> 
> That sounds okay to me. So, mounting would fail if the partition in question
> is busy.

If you mean by `busy' that the partition will be affected by scheduled (or 
already executing) jobs (as I do), then it should definitely fail. But it 
should also fail if the BPartition you're invoking Mount() on has been 
modified, even if the changes have not been committed yet. The reason is to 
avoid confusing situations such as invoking Uninitialize() on a partition 
(but not yet CommitModifications()) and having the subsequent Mount() succeed 
nevertheless. (Situation 2a, BTW.)
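In code, the Mount() policy just described might look roughly like this -- a 
simplified sketch with invented names and error codes, not the actual 
BPartition API:

```cpp
// Hypothetical sketch: Mount() fails if the partition is busy (jobs
// scheduled or executing, case 1) or if this object carries uncommitted
// modifications (situation 2a, e.g. after Uninitialize() without
// CommitModifications()). Names and error codes are made up.
enum status_t_sim { SIM_OK = 0, SIM_BUSY = -1, SIM_MODIFIED = -2 };

struct PartitionSim {
    bool busy = false;       // affected by scheduled or executing jobs
    bool modified = false;   // changed since PrepareModifications(), uncommitted
    bool mounted = false;

    status_t_sim Mount()
    {
        if (busy)
            return SIM_BUSY;        // case 1): always fail
        if (modified)
            return SIM_MODIFIED;    // situation 2a: would mount a stale state
        mounted = true;
        return SIM_OK;
    }
};
```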

Though, if invoked on another BPartition referring to the same physical 
partition, Mount() might succeed even while another user is preparing 
modifications for the device/partition (situation 2b, and, when considering 
mount() as well, also 2c), depending on how we decide. If I understood you 
correctly, you'd find it acceptable (reasonable? desirable?) if Mount() 
failed in that case (even if the modifications the other user is preparing 
for the disk device wouldn't affect the partition in question?).

> Immediate unmounting would fail if the partition is busy

Definitely.

>, but
> queued unmounting would be part of the job queue and thus succeed always.
> Correct?

If we also add a `force' parameter and `true' is supplied, then you are 
correct. If `false' is supplied and there are open nodes when the attempt to 
unmount the partition is made, then the job (and subsequent jobs in the same 
queue) would fail. If we add something like a kernel-user feedback feature, 
then we might have more options, e.g. dropping the `force' parameter and 
instead asking the user whether to force the unmount, or keeping it and 
asking the user only if `false' was supplied...
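A sketch of those queued-unmount semantics, just to pin down the failure 
behavior (names invented; the real jobs would of course do actual work):

```cpp
#include <vector>

// Hypothetical model of a job queue containing unmount jobs: with
// force == true an unmount always succeeds; with force == false it fails
// if nodes are still open, and the remaining jobs in the queue fail with it.
struct UnmountJob {
    bool force;
};

// Runs the queue against a partition with `openNodeCount' open nodes.
// Returns the number of jobs that actually succeeded; once one job fails,
// all subsequent jobs in the same queue fail as well.
int run_queue(const std::vector<UnmountJob>& jobs, int openNodeCount)
{
    int succeeded = 0;
    for (const UnmountJob& job : jobs) {
        bool ok = job.force || openNodeCount == 0;
        if (!ok)
            return succeeded;       // this job and all following ones fail
        ++succeeded;
    }
    return succeeded;
}
```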

CU, Ingo
