[openbeosstorage] Re: To Unmount Or Not To Unmount

  • From: Tyler Dauwalder <tyler@xxxxxxxxxxxxx>
  • To: openbeosstorage@xxxxxxxxxxxxx
  • Date: Fri, 26 Sep 2003 14:33:55 -0700

> > > > > This seems like an
> > > > > (almost :-) reasonable restriction to me: once you start planning
> > > > > a set of jobs that involve a partition, you can't mount/unmount it
> > > > > until you cancel or until no jobs remain that affect that
> > > > > partition.
> > > 
> > > Please don't mix that up. Until you commit the changes you're in
> > > situation 2) (and there aren't any jobs yet), that is, in principle,
> > > there'd be no problem with mounting/unmounting immediately. At least
> > > from the DDM point of view.
> > 
> > No, I wasn't mixed up (at least, I don't think I was :-). I was saying
> > it would seem reasonable to me to disallow mounting and unmounting
> > anything on a device from the time PrepareModifications() was called on
> > it until no shadow partitions were left for it, and then to disallow
> > mounting and unmounting any busy partitions as long as they were still
> > marked busy, just as is implied by the suggestion below that Mount() and
> > Unmount() call PrepareModifications() on their own as needed.
> 
> I see. That's rather restrictive, but it would be the easiest variant to
> implement, I believe. Though, unless there are significant differences in
> the complexity of the possible approaches, I don't think implementation
> effort should be a criterion affecting the decision for or against any
> one of them.

No, I was just saying I could accept that sort of behaviour if necessary.
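
Just to make that restrictive variant concrete, here's a rough sketch of
the check I have in mind (the stub types and the IsShadowed()/IsBusy()
accessors are made up for illustration, not actual DDM API):

  #include <SupportDefs.h>  // status_t, B_OK, B_BUSY

  struct PartitionStub {
      bool busy;                      // jobs scheduled or running against it
      bool IsBusy() const { return busy; }
  };

  struct DiskDeviceStub {
      bool shadowed;                  // true from PrepareModifications()
                                      // until the last shadow partition
                                      // for the device is gone
      // Hypothetical check the DDM would run for every mount/unmount
      // request under the restrictive scheme described above.
      status_t ValidateMountOrUnmount(const PartitionStub& partition) const
      {
          if (shadowed)               // someone is preparing modifications
              return B_BUSY;          // somewhere on this device
          if (partition.IsBusy())     // jobs affect this very partition
              return B_BUSY;
          return B_OK;
      }
  };

Tracker and mount()/unmount() could then map B_BUSY to the "partition is
locked" message discussed below.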

> > > > > We could add an appropriate error code so we could give a
> > > > > reasonable error message in Tracker and mount()/unmount(),
> > > > > suggesting that "The partition is locked, perhaps you're mucking
> > > > > around with things in DriveSetup?", similar to the busy message
> > > > > you get when you try to unmount a CD you're currently playing in
> > > > > SoundPlay. The only problem I see with this so far is that it does
> > > > > seem better to allow a partition to be unmounted even if jobs are
> > > > > being prepared for it, but perhaps this would be justification for
> > > > > a hybrid approach with Unmount() as addressed below.
> > > > 
> > > > Not sure about that - the jobs in question should know about the
> > > > drive status; I could imagine that their operation is different
> > > > when the disk is mounted vs. unmounted.
> > > > Maybe we should think about that deeply first, too :-)
> > > 
> > > Well, as far as the jobs go, there are no choices. As soon as they
> > > are created, the partitions that are going to be affected are marked
> > > busy, and mounting/unmounting and modification requests must fail.
> > > The point is that the jobs are only created when the modifications
> > > are committed. Until then, we are free to mount/unmount would-be
> > > affected partitions.
> > 
> > I think his point was that, for example, whoever is responsible for
> > resizing the bfs partition is likely going to want to know whether the
> > partition is mounted or not, assuming resizing while mounted is
> > supported.
> 
> Maybe my understanding suffered a bit from too much alcohol or sleep
> deprivation, but I can't see where the suspected problem lies. Of course
> anyone involved in the process can check the status of the
> device/partition. For the disk system modules it is available through the
> {disk_device,partition}_data structures (either passed directly to the
> respective hook or gettable via the exported DDM C functions); for the
> jobs and the rest of the DDM code, KDiskDevice/KPartition hold that info;
> and -- though not first hand -- in userland the BDiskDevice/BPartition
> classes provide it.

I think perhaps he just wants to make sure that you don't, say, create a 
resizing job that expects an unmounted partition, only for the partition to 
have been mounted by the time the job actually starts. As long as the 
partition is first marked busy before the job is actually created, though, 
that shouldn't be a problem (and this assumes the job doesn't bother to 
check the partition status itself, which, as you mentioned, it can).
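
In code, the ordering I mean would be roughly this (everything here is a
made-up stand-in, just to show the shape of the commit path):

  #include <SupportDefs.h>  // status_t, int32, B_OK

  struct Partition { bool busy; };

  // Hypothetical commit path, assuming the caller holds the device lock:
  // mark every affected partition busy *before* creating the jobs, so no
  // mount/unmount can sneak in between validation and job creation.
  status_t CommitModifications(Partition** affected, int32 count)
  {
      for (int32 i = 0; i < count; i++)
          affected[i]->busy = true;  // Mount()/Unmount() now fail for these

      // Only now create the jobs; each job can rely on the mount state it
      // validated against, since that state can no longer change under it.
      // ... build the jobs from the shadow/physical partition diff ...

      return B_OK;
  }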

> [...]
> > > > > > Oh, there's still the general question whether the user has to
> > > > > > Unmount() the concerned partition explicitly, when e.g.
> > > > > > CanResize() reported that it works only when the partition is
> > > > > > unmounted, or whether Resize() would automagically do the
> > > > > > unmounting (or at least mark the partition to be unmounted).
> > > > > Perhaps an unmount call should just be placed in front of the
> > > > > offending job in the job queue when it's built.
> > > 
> > > What I was thinking of was that, when a job gets executed, it would
> > > itself first invoke the respective Validate*() hooks of the disk
> > > system again and learn whether the disk system needs a currently
> > > mounted partition unmounted, or whether something's fishy and it
> > > should rather fail on the spot.
> > > 
> > > So, if it realizes that the partition is mounted, but the disk system
> > > can only operate on it when unmounted, it could either a) unmount the
> > > partition or b) fail.
> > 
> > Or c) block and request that the user either unmount the partition or
> > cancel. Or d) block and warn the user that when they click "unmount", the
> > partition will be promptly unmounted, thus they should close anything from
> > that partition that they currently have open (or risk losing data) or else
> > click "cancel" to cancel the job queue.
> >
> > > The problem with a) is that it's a bit beyond the control of the
> > > user. If, as Axel says, unmounting would always work without heeding
> > > pending vnodes, then the user might risk losing data, I suppose.
> > 
> > I agree, that's a bit mean to pull the rug out from under the user like
> > that. b) seems like a weak solution, too, though. c) and d) seem nicer to
> > me. Is there any reason you can think of why either couldn't be made to
> > work?
> 
> Why, yes. The job is executed in the kernel, and while it would be no
> problem to block it, I see some difficulty in how the user would be
> notified (and how we would even get feedback from them). If we had the
> Interface Kit available in the kernel (not that I'm recommending that;
> surely not :-), we could simply pop up an alert, but with the design as
> is, that can't be done without some additional userland service the
> kernel could use.

No, I wasn't intending to have the kernel actually pop up the alert, just 
somehow communicate to the corresponding userland process what's going on.

> Mmh, we already discussed some time ago how to move the (kernel)
> notification message delivery to the registrar. The registrar would
> register with the kernel, via a yet-to-be-defined interface, as an entity
> providing the notification message delivery service. In the same manner
> some server/daemon (in doubt, also the registrar) could register as a
> kernel-user interaction service. For simple multiple-choice feedback
> requests (alerts) that shouldn't be that hard to do.

I was thinking more along the lines of the user process managing the disk 
device jobs (i.e. DriveSetup, or whatever app is using the DiskDevice API) 
registering to be notified when such feedback requests need to be made. 
Tracker could register to handle cases where a user tries to manually 
unmount a partition with open file descriptors, DriveSetup could handle 
cases where a mounted partition is scheduled to be resized by an add-on 
that only supports offline resizing, etc.
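
Something along these lines, perhaps (the class, method, and event names
are all invented here; BMessenger just seems like the obvious way to
address a userland target):

  #include <Messenger.h>    // BMessenger
  #include <SupportDefs.h>  // status_t

  // Hypothetical feedback events a userland app could register for.
  enum feedback_event {
      FEEDBACK_UNMOUNT_OPEN_NODES,  // Tracker: manual unmount, open files
      FEEDBACK_OFFLINE_RESIZE       // DriveSetup: resize needs an unmount
  };

  class DiskDeviceFeedbackRoster {
  public:
      // Deliver feedback requests of the given kind to `target'. The
      // handler answers with a multiple-choice reply (e.g. "unmount" or
      // "cancel") that is relayed back to the blocked kernel job.
      status_t RegisterHandler(const BMessenger& target,
                               feedback_event kind);
  };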

> > > > What would we gain from the latter?
> > > 
> > > Obviously that one can unmount a partition while another app is
> > > preparing modifications for the same disk device (mind you, it
> > > doesn't even need to be the same partition, nor does it play a role
> > > whether any changes have been made at all yet).
> > > 
> > > Just to throw another alternative onto the pitch: the
> > > Mount()/Unmount() methods could have a boolean parameter indicating
> > > whether the operation shall be carried out immediately or be queued.
> > > Not sure if that makes any sense in the case of Mount(), though.
> > > Usually invoking the modifying methods on a BPartition object indeed
> > > immediately changes the object. That's a bit complicated for Mount(),
> > > since one can't get a valid dev_t until the partition is really
> > > mounted.
> > > 
> > > So, maybe only a `bool immediately = true' parameter for Unmount(),
> > > while Mount() would always work immediately and fail if the partition
> > > was modified?
> > 
> > That sounds okay to me. So, mounting would fail if the partition in
> > question is busy.
> 
> If you mean by `busy' that the partition will be affected by scheduled (or
> already executing) jobs (as I do), then it should definitely fail.

Yes, I'm trying to use the proper vocabulary now that I'm catching on. :-)

> But also if the BPartition you're invoking Mount() on has been modified,
> even if the changes have not been committed yet. The reason is to avoid
> confusing situations where you invoke Uninitialize() on a partition (but
> not yet CommitModifications()) and the subsequent Mount() would succeed
> nevertheless. (Situation 2a, BTW.)

One could also argue that by doing that, you're treating Mount() like a 
queueable operation, even though it acts (or fails) immediately, which is 
confusing. Especially since (unless I'm wrong here, but you didn't say 
otherwise elsewhere) an Unmount(immediate=true) call would succeed in the 
place of the Mount() call in the above example, while an 
Unmount(immediate=false) call would fail (at the time of the job). In other 
words, Mount() is always immediate, so I find the idea of basing its 
success on the status of the shadow partitions a bit confusing. It seems to 
me that Mount() and Unmount(immediate=true), being immediate, should have 
semantics that are independent of any modifications being prepared in the 
same BPartition hierarchy.
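
To spell the asymmetry out in code (stand-in types with dummy bodies; the
comments describe the semantics as proposed in this thread, none of which
are settled):

  // Stand-ins for the classes under discussion.
  struct Device    { void PrepareModifications() {} };
  struct Partition {
      void Uninitialize()            {}  // modifies only the shadow
      void Mount()                   {}  // proposed: fails here, because
                                         // the shadow has been modified
      void Unmount(bool immediately) {}  // true:  immediate, would succeed
                                         //        in Mount()'s place
                                         // false: queued; the job fails
                                         //        later, at execution time
  };

  // Three alternative next steps (not a sequence):
  void ConfusingCase(Device& device, Partition& partition)
  {
      device.PrepareModifications();
      partition.Uninitialize();
      partition.Mount();         // fails immediately
      partition.Unmount(true);   // succeeds immediately
      partition.Unmount(false);  // "succeeds" now, fails as a job later
  }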

> Though, if invoked on another BPartition referring to the same physical
> partition, the Mount() might succeed even if another user is preparing
> modifications for the device/partition (situation 2b, and when
> considering mount() as well, also 2c), depending on how we decide. If I
> understood you correctly, you'd find it acceptable (reasonable?
> desirable?) if the Mount() failed in that case (even if the modifications
> the other user is preparing for the disk device wouldn't affect the
> partition in question?).

I would find it reasonably acceptable to fail, but desirable to succeed. In 
other words, 2b and 2c succeeding as described sounds good to me. :-)

> > Immediate unmounting would fail if the partition is busy
> 
> Definitely.
> 
> >, but queued unmounting would be part of the job queue and thus always
> > succeed. Correct?
> 
> If we also add a `force' parameter and `true' is supplied, then you are
> correct. If `false' is supplied and there are open nodes when the attempt to
> unmount the partition is made, then the job (and subsequent jobs in the same
> queue) would fail. 

Yes, that's precisely what I was hoping for. :-)
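
So, pulling the two parameters together, the signature would presumably
look something like this (illustrative only, nothing settled):

  #include <SupportDefs.h>  // status_t

  class BPartitionSketch {
  public:
      // immediately == true:  unmount right away; fails (e.g. with
      //                       B_BUSY) if the partition is busy.
      // immediately == false: append an unmount job to the queue. With
      //                       force == false the job -- and the subsequent
      //                       jobs in the same queue -- fails if open
      //                       nodes remain at execution time; with
      //                       force == true it unmounts regardless.
      status_t Unmount(bool immediately = true, bool force = false);
  };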

> If we add something like a kernel-user feedback feature,
> then we might have more options, e.g. dropping the `force' parameter and
> rather asking the user whether to force the unmount, or keeping it and
> asking the user if `false' was supplied...

Exactly.

-Tyler
