[haiku-development] Re: Review needed for Sector Size not being reported correctly bug

  • From: Ingo Weinhold <ingo_weinhold@xxxxxx>
  • To: haiku-development@xxxxxxxxxxxxx
  • Date: Mon, 23 Feb 2015 00:06:22 +0100

On 02/22/2015 08:51 PM, Kushal Singh wrote:

On Sun Feb 22 2015 at 8:55:14 PM Ingo Weinhold <ingo_weinhold@xxxxxx> wrote:

    Long story short, the file systems setting partition_data::block_size to
    their content block size is correct, or IOW your patch 0001 is incorrect.
    I have only glanced over the comments in the ticket. From that and the
    not very helpful ticket description I unfortunately cannot say what the
    actual problem is that your patch 0001 is meant to fix. Your comment 17
    [1] seems to imply that when uninitializing/re-initializing a partition
    the block_size field isn't reset correctly.

That is one part of the problem: the value isn't reset correctly after
initialization (this can be solved if I initialize the value in the intel
partitioning code).
But the main problem is this: suppose I have a BFS partition. To check
whether an intel partition map can be initialized on it, the
CanInitialize() function (this was actually the part that needed
correction initially) at
http://grok.bikemonkey.org/source/xref/haiku/src/add-ons/disk_systems/intel/PartitionMapAddOn.cpp#90

checks whether that is possible (without actually initializing anything).
It uses the BlockSize() function, which should return the sector size, as
commit 5f1b31debdf97da331be6e97a6c2bc5899e2423a suggests. However, that is
not the case: the value returned by BlockSize() is the file system block
size.

Indeed, that is incorrect. CanInitialize() should use the block size the partitioning system would use, if the partition were initialized with it, not the block size the current disk system uses.
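
To illustrate why that matters, here is a rough sketch of the kind of check CanInitialize() performs (illustrative only, not the actual add-on source; the factor of two is just a placeholder for whatever minimum the partitioning system requires):

bool
PartitionMapAddOn::CanInitialize(const BMutablePartition* partition)
{
	// A size check of this form is only meaningful if BlockSize()
	// refers to the block size the intel partitioning system would
	// use. At the moment it still returns the content (e.g. BFS)
	// block size of the partition's current disk system.
	return partition->Size() >= 2 * (off_t)partition->BlockSize();
}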

Now BlockSize() returns the value of fData->block_size. But that value
comes from the partition data at
http://grok.bikemonkey.org/source/xref/haiku/src/kits/storage/disk_device/MutablePartition.cpp#561
(which at that point is undoubtedly the file system block size), so it
returns the file system block size. But that is not what we want; we want
it to return the sector size.

Agreed.
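
For reference, the current behaviour amounts to the accessor simply handing back what the partition data carries (paraphrased sketch, not verbatim source; fData is BMutablePartition's pointer to the underlying partition data):

uint32
BMutablePartition::BlockSize() const
{
	// By the time the disk system add-ons see it, block_size holds
	// the content (file system) block size, not the device's sector
	// size.
	return fData->block_size;
}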

Now, in order to fix the bug (and the patch), I have the following
suggestions:
1) Change the return value of BlockSize() by using an ioctl (using the
bytes_per_sector value, since the value we get now is surely not what is
expected there anyway).

Actually, the value is what it is expected to be: the block size of the disk system the partition is currently initialized with.
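
For reference, the ioctl route suggestion 1 refers to would look roughly like this (a sketch, assuming an open file descriptor for the raw device; B_GET_GEOMETRY and device_geometry come from Drivers.h):

#include <Drivers.h>
#include <unistd.h>

static uint32
device_sector_size(int deviceFD)
{
	// Ask the driver for the device geometry; bytes_per_sector is the
	// device's natural block (sector) size.
	device_geometry geometry;
	if (ioctl(deviceFD, B_GET_GEOMETRY, &geometry, sizeof(geometry)) != 0)
		return 0;
	return geometry.bytes_per_sector;
}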

2) Use the current patch 0001, which changes the interpretation of the
partition->block_size value. Note that the changes I suggested in my patch
do change the interpretation, but as you can see from the search at
http://grok.bikemonkey.org/source/search?q=partition%5C-%5C%3Eblock_size&defs=&refs=&path=&hist=&project=haiku

reinterpreting partition->block_size does not affect the existing code in
any harmful way (in fact I don't think it affects anything major).

I haven't checked whether there are any (other) real users of the block_size field (e.g. BPartition::BlockSize() returns it as well). At any rate, changing the interpretation of the field, so that for every partition the block_size field would contain the block size of the device, would be highly redundant, and the information about the content block size (even if currently only used informatively) would no longer be available. So I'm not in favor of this solution.


On 02/22/2015 10:31 PM, Kushal Singh wrote:

Also, one more point that I missed in my most recent mail:

On Sun Feb 22 2015 at 8:55:14 PM Ingo Weinhold <ingo_weinhold@xxxxxx> wrote:

    The block_size field doesn't have a corresponding content_block_size
    field. The rationale for this is that partitioning systems don't specify
    block sizes for partitions they define. They just have a block size they
    use for defining/interpreting partition offsets and sizes. So, in fact,
    they have a content block size, just as file systems have a content
    block size. Since there is no other block size for a partition, the
    field has simply been named block_size instead of content_block_size.

There is another block size value for a partition, namely the block
device's block size (aka the sector size). Kindly note that the first
value seen while scanning (as also seen in the syslog, just before
scanning) is the sector size, which after scanning turns into the file
system block size. So initially the value it gets (after reading the
device geometry) "is" the sector size; this value later gets overwritten
with the content block size.

Indeed, that's just because the disk device manager initializes it to *some* value (the same way the intel partitioning system initializes the child partitions' block_size value). Maybe it would be more consistent to initialize the field to, e.g., 0 instead.

Another suggestion (for fixing the issue) that I probably missed in my
previous mail:
4) Add a sector_size field. Not only would that solve the entire problem,
it would also make the value really easy to access.

Such a field doesn't make too much sense for most partitions (only for the root partition, really). Instead, as I suggested in my previous mail, I'd rather add a block_size field to [user_]disk_device_data, which would be the device's natural block size (I wouldn't call it "sector size").

The intel partitioning system kernel module could get the respective disk_device_data (get_disk_device()) and access the value fairly easily.
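
A minimal sketch of that kernel-side access, assuming the proposed block_size field were added to disk_device_data (headers omitted; neither the field nor the helper below exist in the tree):

static uint32
device_block_size_for(partition_data* partition)
{
	// get_disk_device() is the existing disk device manager hook; the
	// caller must hold the usual read lock. block_size is the field
	// proposed above, not an existing member.
	disk_device_data* device = get_disk_device(partition->id);
	if (device == NULL)
		return 0;
	return device->block_size;
}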

For the userland add-on it isn't that easy ATM. I would:

* add a BDiskDevice::DeviceBlockSize(), which just returns the user_disk_device_data::block_size field, and

* add a BMutablePartition::DeviceBlockSize(), which returns GetDelegate()->Partition()->Device()->DeviceBlockSize().

This way the intel partitioning system add-on can use BMutablePartition::DeviceBlockSize() in the methods where BlockSize() doesn't have the matching value yet.
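
Roughly, the two proposed accessors might look like this (a sketch only; neither the methods nor the user_disk_device_data::block_size field exist yet, and fDeviceData stands for BDiskDevice's cached user_disk_device_data):

uint32
BDiskDevice::DeviceBlockSize() const
{
	// Just hand back the proposed field from the cached device data.
	return fDeviceData->block_size;
}


uint32
BMutablePartition::DeviceBlockSize() const
{
	// Forward to the BDiskDevice of the partition we delegate for.
	return GetDelegate()->Partition()->Device()->DeviceBlockSize();
}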

CU, Ingo

